Feb 03 12:05:26 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 03 12:05:26 crc restorecon[4674]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:26 crc restorecon[4674]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:26 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 
12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc 
restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 12:05:27 crc restorecon[4674]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 03 12:05:27 crc kubenswrapper[4679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 12:05:27 crc kubenswrapper[4679]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 03 12:05:27 crc kubenswrapper[4679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 12:05:27 crc kubenswrapper[4679]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 03 12:05:27 crc kubenswrapper[4679]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 03 12:05:27 crc kubenswrapper[4679]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.977506 4679 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983473 4679 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983509 4679 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983517 4679 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983523 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983528 4679 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983534 4679 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983538 4679 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983542 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983546 4679 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983550 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983554 4679 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983558 4679 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983562 4679 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983567 4679 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983571 4679 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983575 4679 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983580 4679 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983584 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983588 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983593 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 12:05:27 crc 
kubenswrapper[4679]: W0203 12:05:27.983598 4679 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983603 4679 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983607 4679 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983611 4679 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983615 4679 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983620 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983624 4679 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983628 4679 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983632 4679 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983636 4679 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983642 4679 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983655 4679 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983660 4679 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983665 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983670 4679 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983675 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983679 4679 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983686 4679 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983690 4679 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983696 4679 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983700 4679 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983704 4679 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983709 4679 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983713 4679 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983717 4679 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983722 4679 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983726 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983732 4679 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983737 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983742 4679 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983746 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983751 4679 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983755 4679 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983759 4679 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983764 4679 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983768 4679 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983774 4679 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983779 4679 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983783 4679 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983788 4679 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983794 4679 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983798 4679 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983803 4679 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983807 4679 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983812 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983817 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983822 4679 feature_gate.go:330] unrecognized feature gate: Example Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983826 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983830 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983839 4679 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.983847 4679 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984879 4679 flags.go:64] FLAG: --address="0.0.0.0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984900 4679 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984918 4679 flags.go:64] FLAG: --anonymous-auth="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984927 4679 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984935 4679 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984940 4679 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984949 4679 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984957 4679 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984962 4679 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984968 4679 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984975 4679 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984980 4679 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984985 4679 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984990 4679 flags.go:64] FLAG: --cgroup-root="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984994 4679 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.984998 4679 flags.go:64] FLAG: --client-ca-file="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985002 4679 flags.go:64] FLAG: --cloud-config="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985007 4679 flags.go:64] FLAG: --cloud-provider="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985011 4679 flags.go:64] FLAG: --cluster-dns="[]" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985016 4679 flags.go:64] FLAG: --cluster-domain="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985021 4679 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985025 4679 flags.go:64] FLAG: --config-dir="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985029 4679 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985034 4679 flags.go:64] FLAG: --container-log-max-files="5" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985041 4679 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985045 4679 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985052 4679 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985058 4679 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985063 4679 flags.go:64] FLAG: --contention-profiling="false" Feb 03 
12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985068 4679 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985080 4679 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985084 4679 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985089 4679 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985095 4679 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985099 4679 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985105 4679 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985110 4679 flags.go:64] FLAG: --enable-load-reader="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985116 4679 flags.go:64] FLAG: --enable-server="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985121 4679 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985131 4679 flags.go:64] FLAG: --event-burst="100" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985137 4679 flags.go:64] FLAG: --event-qps="50" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985143 4679 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985148 4679 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985154 4679 flags.go:64] FLAG: --eviction-hard="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985163 4679 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985168 4679 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985175 4679 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985181 4679 flags.go:64] FLAG: --eviction-soft="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985186 4679 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985192 4679 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985198 4679 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985204 4679 flags.go:64] FLAG: --experimental-mounter-path="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985209 4679 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985215 4679 flags.go:64] FLAG: --fail-swap-on="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985220 4679 flags.go:64] FLAG: --feature-gates="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985228 4679 flags.go:64] FLAG: --file-check-frequency="20s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985233 4679 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985239 4679 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985246 4679 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 03 12:05:27 crc 
kubenswrapper[4679]: I0203 12:05:27.985252 4679 flags.go:64] FLAG: --healthz-port="10248" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985257 4679 flags.go:64] FLAG: --help="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985262 4679 flags.go:64] FLAG: --hostname-override="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985268 4679 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985274 4679 flags.go:64] FLAG: --http-check-frequency="20s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985279 4679 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985283 4679 flags.go:64] FLAG: --image-credential-provider-config="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985288 4679 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985292 4679 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985296 4679 flags.go:64] FLAG: --image-service-endpoint="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985301 4679 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985306 4679 flags.go:64] FLAG: --kube-api-burst="100" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985311 4679 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985316 4679 flags.go:64] FLAG: --kube-api-qps="50" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985320 4679 flags.go:64] FLAG: --kube-reserved="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985325 4679 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985329 4679 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985335 4679 flags.go:64] FLAG: --kubelet-cgroups="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985339 4679 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985344 4679 flags.go:64] FLAG: --lock-file="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985349 4679 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985373 4679 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985380 4679 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985391 4679 flags.go:64] FLAG: --log-json-split-stream="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985396 4679 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985402 4679 flags.go:64] FLAG: --log-text-split-stream="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985408 4679 flags.go:64] FLAG: --logging-format="text" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985414 4679 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985420 4679 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985426 4679 flags.go:64] FLAG: --manifest-url="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 
12:05:27.985430 4679 flags.go:64] FLAG: --manifest-url-header="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985439 4679 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985445 4679 flags.go:64] FLAG: --max-open-files="1000000" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985452 4679 flags.go:64] FLAG: --max-pods="110" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985458 4679 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985464 4679 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985469 4679 flags.go:64] FLAG: --memory-manager-policy="None" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985475 4679 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985481 4679 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985486 4679 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985491 4679 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985506 4679 flags.go:64] FLAG: --node-status-max-images="50" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985511 4679 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985515 4679 flags.go:64] FLAG: --oom-score-adj="-999" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985520 4679 flags.go:64] FLAG: --pod-cidr="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985524 4679 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985532 4679 flags.go:64] FLAG: --pod-manifest-path="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985537 4679 flags.go:64] FLAG: --pod-max-pids="-1" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985542 4679 flags.go:64] FLAG: --pods-per-core="0" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985546 4679 flags.go:64] FLAG: --port="10250" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985551 4679 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985555 4679 flags.go:64] FLAG: --provider-id="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985559 4679 flags.go:64] FLAG: --qos-reserved="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985563 4679 flags.go:64] FLAG: --read-only-port="10255" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985568 4679 flags.go:64] FLAG: --register-node="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985573 4679 flags.go:64] FLAG: --register-schedulable="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985577 4679 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985586 4679 flags.go:64] FLAG: --registry-burst="10" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985590 4679 flags.go:64] FLAG: --registry-qps="5" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985597 
4679 flags.go:64] FLAG: --reserved-cpus="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985601 4679 flags.go:64] FLAG: --reserved-memory="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985609 4679 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985614 4679 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985620 4679 flags.go:64] FLAG: --rotate-certificates="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985625 4679 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985630 4679 flags.go:64] FLAG: --runonce="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985635 4679 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985640 4679 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985646 4679 flags.go:64] FLAG: --seccomp-default="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985651 4679 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985655 4679 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985661 4679 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985667 4679 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985674 4679 flags.go:64] FLAG: --storage-driver-password="root" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985679 4679 flags.go:64] FLAG: --storage-driver-secure="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985685 4679 flags.go:64] FLAG: --storage-driver-table="stats" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985690 4679 flags.go:64] FLAG: --storage-driver-user="root" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985695 4679 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985701 4679 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985706 4679 flags.go:64] FLAG: --system-cgroups="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985711 4679 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985720 4679 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985725 4679 flags.go:64] FLAG: --tls-cert-file="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985731 4679 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985738 4679 flags.go:64] FLAG: --tls-min-version="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985743 4679 flags.go:64] FLAG: --tls-private-key-file="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985748 4679 flags.go:64] FLAG: --topology-manager-policy="none" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985753 4679 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985759 4679 flags.go:64] FLAG: --topology-manager-scope="container" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985765 
4679 flags.go:64] FLAG: --v="2" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985773 4679 flags.go:64] FLAG: --version="false" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985781 4679 flags.go:64] FLAG: --vmodule="" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985788 4679 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.985797 4679 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985942 4679 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985949 4679 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985953 4679 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985957 4679 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985961 4679 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985965 4679 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985968 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985973 4679 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985977 4679 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985981 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985985 4679 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985988 4679 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985992 4679 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985995 4679 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.985999 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986002 4679 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986006 4679 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986009 4679 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986013 4679 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986016 4679 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986020 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986024 4679 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 
12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986027 4679 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986031 4679 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986034 4679 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986038 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986041 4679 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986045 4679 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986048 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986053 4679 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986057 4679 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986064 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986067 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986072 4679 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986076 4679 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986080 4679 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986086 4679 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986090 4679 feature_gate.go:330] unrecognized feature gate: Example Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986093 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986098 4679 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986101 4679 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986106 4679 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986110 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986116 4679 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986121 4679 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986126 4679 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986131 4679 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986135 4679 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986140 4679 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986145 4679 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986149 4679 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986154 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986158 4679 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986162 4679 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986167 4679 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986171 4679 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986176 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986180 4679 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986184 4679 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986189 4679 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986193 4679 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986197 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986202 4679 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986209 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986213 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986216 4679 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986220 4679 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986224 4679 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986229 4679 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986232 4679 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.986236 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.986244 4679 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.995827 4679 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.995905 4679 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996033 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996061 4679 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996070 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996079 4679 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996086 4679 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996092 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996097 4679 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996103 4679 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996109 4679 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996115 4679 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996123 4679 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
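The run of feature_gate.go:330 warnings above, and the near-identical runs that repeat below, all come from the same situation: the full cluster gate set, including what appear to be OpenShift-level gates (GatewayAPI, NewOLM, MachineConfigNodes, ...), is handed to the kubelet's gate parser, which only recognizes upstream Kubernetes gates and warns on the rest. Only recognized gates survive into the resolved map printed by feature_gate.go:386 above. As a sketch, the surviving explicit settings expressed as the featureGates field of the kubelet config file; gate names and values are copied from that resolved map, while the field placement is an assumption about how such a node could be configured:

    featureGates:
      CloudDualStackNodeIPs: true                   # GA, per feature_gate.go:353
      DisableKubeletCloudCredentialProviders: true  # GA, per feature_gate.go:353
      KMSv1: true                                   # deprecated, per feature_gate.go:351
      ValidatingAdmissionPolicy: true               # GA, per feature_gate.go:353

The :351 and :353 notices say the same thing in both directions: the explicit setting still parses today, but the gate itself is scheduled for removal in a future release.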
Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996134 4679 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996140 4679 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996146 4679 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996152 4679 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996158 4679 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996164 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996171 4679 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996177 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996184 4679 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996190 4679 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996198 4679 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996205 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996211 4679 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996216 4679 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996222 4679 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996227 4679 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996233 4679 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996238 4679 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996243 4679 feature_gate.go:330] unrecognized feature gate: Example Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996248 4679 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996286 4679 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996293 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996299 4679 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996307 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996312 4679 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996320 4679 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996326 4679 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996333 4679 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996338 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996343 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996349 4679 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996374 4679 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996380 4679 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996385 4679 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996391 4679 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996396 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996402 4679 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996407 4679 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996412 4679 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996418 4679 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996423 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996430 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996437 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996442 4679 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996447 4679 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996455 4679 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996463 4679 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996469 4679 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996475 4679 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996481 4679 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996487 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996492 4679 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996499 4679 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996504 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996510 4679 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996515 4679 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996521 4679 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996526 4679 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996532 4679 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996538 4679 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 12:05:27 crc kubenswrapper[4679]: I0203 12:05:27.996548 4679 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996713 4679 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996723 4679 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 12:05:27 crc kubenswrapper[4679]: W0203 12:05:27.996729 4679 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996735 4679 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996745 4679 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996752 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996758 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996764 4679 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996770 4679 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996776 4679 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996781 4679 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996787 4679 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996792 4679 feature_gate.go:330] unrecognized feature gate: Example Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996800 4679 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996808 4679 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996813 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996819 4679 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996824 4679 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996830 4679 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996839 4679 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996848 4679 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996856 4679 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996863 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996870 4679 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996877 4679 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996884 4679 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996892 4679 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996899 4679 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996905 4679 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996912 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996920 4679 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996927 4679 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996934 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996941 
4679 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996951 4679 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996957 4679 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996964 4679 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996971 4679 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996977 4679 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996984 4679 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.996992 4679 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997000 4679 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997007 4679 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997014 4679 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997021 4679 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997031 4679 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997040 4679 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997054 4679 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997061 4679 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997068 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997075 4679 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997082 4679 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997092 4679 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997104 4679 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997115 4679 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997121 4679 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997128 4679 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997138 4679 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997146 4679 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997152 4679 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997159 4679 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997165 4679 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997175 4679 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997182 4679 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997188 4679 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997195 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997202 4679 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997209 4679 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997217 4679 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997225 4679 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:27.997233 4679 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:27.997245 4679 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:27.998577 4679 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.006125 4679 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.006284 4679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.008401 4679 server.go:997] "Starting client certificate rotation"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.008433 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.008631 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 23:24:49.314656628 +0000 UTC
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.008725 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.036617 4679 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.038885 4679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.039256 4679 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.055331 4679 log.go:25] "Validated CRI v1 runtime API"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.101199 4679 log.go:25] "Validated CRI v1 image API"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.104616 4679 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.109817 4679 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-03-12-01-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.109851 4679 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.128516 4679 manager.go:217] Machine: {Timestamp:2026-02-03 12:05:28.125572459 +0000 UTC m=+0.600468557 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:de5ca927-c183-4b52-ac09-5efe9929986a BootID:af702107-1d4b-4aae-b3c8-60dab6d82e59 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0c:0e:f8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0c:0e:f8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4e:e7:aa Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:32:3a:b4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f8:a3:f1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c2:6d:74 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:16:8d:1f:23:ae Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:05:b0:26:bc:5a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.128765 4679 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.129015 4679 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.129492 4679 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.129690 4679 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.129726 4679 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.131328 4679 topology_manager.go:138] "Creating topology manager with none policy"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.131368 4679 container_manager_linux.go:303] "Creating device plugin manager"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.131899 4679 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.131937 4679 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.132716 4679 state_mem.go:36] "Initialized new in-memory state store"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.132824 4679 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.137788 4679 kubelet.go:418] "Attempting to sync node with API server"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.137827 4679 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.137856 4679 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.137880 4679 kubelet.go:324] "Adding apiserver pod source"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.137901 4679 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.142584 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.142681 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.142656 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.142760 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.142931 4679 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.143948 4679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.145395 4679 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146863 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146893 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146901 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146908 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146918 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146925 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146933 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146944 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146953 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.146962 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.147003 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.147011 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.148043 4679 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.148743 4679 server.go:1280] "Started kubelet"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.149609 4679 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 03 12:05:28 crc systemd[1]: Started Kubernetes Kubelet.
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.152730 4679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.153423 4679 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.153977 4679 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.155351 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.155422 4679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.155676 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:35:56.112560644 +0000 UTC
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.155844 4679 server.go:460] "Adding debug handlers to kubelet server"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.156005 4679 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.156018 4679 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.156052 4679 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.156263 4679 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.157185 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.157375 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError"
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.157683 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="200ms"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.159395 4679 factory.go:55] Registering systemd factory
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.159421 4679 factory.go:221] Registration of the systemd container factory successfully
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.162433 4679 factory.go:153] Registering CRI-O factory
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.162485 4679 factory.go:221] Registration of the crio container factory successfully
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.162632 4679 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.162677 4679 factory.go:103] Registering Raw factory
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.162706 4679 manager.go:1196] Started watching for new ooms in manager
Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.161836 4679 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.18:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890bb0f59001b68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:05:28.14868772 +0000 UTC m=+0.623583818,LastTimestamp:2026-02-03 12:05:28.14868772 +0000 UTC m=+0.623583818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.163606 4679 manager.go:319] Starting recovery of all containers
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170622 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170676 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170688 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170698 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170709 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170719 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170729 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170740 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170751 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170763 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170774 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170784 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170794 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170805 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170817 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170828 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170840 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170848 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170880 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170890 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170900 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170910 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170962 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170974 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.170986 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171001 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171019 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171034 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171046 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171056 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171067 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171077 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171086 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171119 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171132 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171144 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171153 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171161 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171171 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171180 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171190 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171203 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171216 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171228 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171240 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171252 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171263 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171274 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171283 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171293 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171304 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171313 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171326 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171336 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171348 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171422 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171433 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171444 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171456 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171464 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171474 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171483 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171492 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171505 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.171517 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174693 4679 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174766 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174789 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174804 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174819 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174833 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174847 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174861 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174875 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174889 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174906 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174921 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174937 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174952 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174971 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.174984 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175001 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175015 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175029 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175043 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175056 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175069 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175083 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175096 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175110 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175127 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175145 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175159 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175254 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175385 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175410 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175425 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175478 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175502 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175520 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175856 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175888 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175903 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175916 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175930 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175983 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.175999 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176016 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176098 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176137 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176205 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176222 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176236 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176248 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176262 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176292 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176303 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176315 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176327 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176338 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176409 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176423 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176456 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176470 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176483 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176493 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176508 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176554 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176566 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176577 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176589 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176620 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176634 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176646 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176659 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176672 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176702 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176717 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176730 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176743 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176772 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176785 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176800 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176812 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176825 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176854 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176870 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176885 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176901 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176942 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5"
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176959 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.176981 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177022 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177039 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177058 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177097 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177115 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177128 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177143 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177174 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177187 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177200 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177212 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177232 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177285 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177298 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177311 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177341 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177499 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177517 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177530 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177548 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177578 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177592 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177609 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177623 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177658 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177675 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177692 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177714 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177753 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177769 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.177884 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.178121 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.178275 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.178303 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.178319 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179039 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179062 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179076 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179094 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179133 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179148 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179161 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179173 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179208 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179223 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179235 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179249 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179262 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179297 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179311 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179323 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179334 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179346 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179390 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179406 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179420 4679 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179433 4679 reconstruct.go:97] "Volume reconstruction finished" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.179461 4679 reconciler.go:26] "Reconciler: start to sync state" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.193872 4679 manager.go:324] Recovery completed Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.205626 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.207890 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.208026 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.208099 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.208927 4679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.208996 4679 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.209023 4679 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.209448 4679 state_mem.go:36] "Initialized new in-memory state store" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.210354 4679 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.210418 4679 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.210453 4679 kubelet.go:2335] "Starting kubelet main sync loop" Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.210554 4679 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.211159 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.211212 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.236170 4679 policy_none.go:49] "None policy: Start" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.238215 4679 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.238261 4679 state_mem.go:35] "Initializing new in-memory state store" Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.256205 4679 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.291870 4679 manager.go:334] "Starting Device Plugin manager" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.291951 4679 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.291969 4679 server.go:79] "Starting device plugin registration server" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.292594 4679 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.292687 4679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.292891 4679 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.293012 4679 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.293027 4679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.303818 4679 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.311538 4679 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 12:05:28 crc kubenswrapper[4679]: 
I0203 12:05:28.311622 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.312801 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.312883 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.312900 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.313217 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.314034 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.314091 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.314567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.314601 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.314613 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.314768 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315046 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315061 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315067 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315166 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315146 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315930 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.315946 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.316127 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.316233 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.316294 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.317689 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.317715 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.317725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.317895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.317914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.317925 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.318033 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.318540 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.318573 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.318616 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.318674 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.318687 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.319448 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.319468 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.319479 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.319695 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.319730 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.319943 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.320064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.320209 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.320859 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.320892 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.320902 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.358372 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="400ms" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382522 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382584 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382645 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382693 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382712 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382727 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382744 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382758 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382776 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382890 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382969 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.382989 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.383007 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.383045 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 
12:05:28.383068 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.393755 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.395262 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.395322 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.395338 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.395401 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.396109 4679 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.18:6443: connect: connection refused" node="crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484539 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484608 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484639 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484669 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484690 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484695 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484729 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484748 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484712 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484774 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484781 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484809 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484831 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484881 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484905 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484927 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484935 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484950 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484970 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.484973 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485000 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485026 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485047 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485053 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485071 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485077 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485094 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485125 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485149 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.485253 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.596272 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.597452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.597496 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.597508 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.597533 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.598036 4679 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.18:6443: connect: connection refused" node="crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.643704 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.650549 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.675685 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.699452 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b495a737ca77bdc44f91ccb344eb8beee0e8056edfadc0f6522a996bef8caeee WatchSource:0}: Error finding container b495a737ca77bdc44f91ccb344eb8beee0e8056edfadc0f6522a996bef8caeee: Status 404 returned error can't find the container with id b495a737ca77bdc44f91ccb344eb8beee0e8056edfadc0f6522a996bef8caeee Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.702654 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4fe5834e48c9d9e2ce65f935312aee3a4b156ada07accfc73ef3d226e7f804c3 WatchSource:0}: Error finding container 4fe5834e48c9d9e2ce65f935312aee3a4b156ada07accfc73ef3d226e7f804c3: Status 404 returned error can't find the container with id 4fe5834e48c9d9e2ce65f935312aee3a4b156ada07accfc73ef3d226e7f804c3 Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.708141 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.708592 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3681858895e941e5ddd10d00723608b0ebac39ffb77be3b549c2fe54398c852d WatchSource:0}: Error finding container 3681858895e941e5ddd10d00723608b0ebac39ffb77be3b549c2fe54398c852d: Status 404 returned error can't find the container with id 3681858895e941e5ddd10d00723608b0ebac39ffb77be3b549c2fe54398c852d Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.712552 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:28 crc kubenswrapper[4679]: W0203 12:05:28.735087 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-855d9e4d84cb7bac2d12db6ca5020af42be5bc08757c767ec4979abc3e157930 WatchSource:0}: Error finding container 855d9e4d84cb7bac2d12db6ca5020af42be5bc08757c767ec4979abc3e157930: Status 404 returned error can't find the container with id 855d9e4d84cb7bac2d12db6ca5020af42be5bc08757c767ec4979abc3e157930 Feb 03 12:05:28 crc kubenswrapper[4679]: E0203 12:05:28.759375 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="800ms" Feb 03 12:05:28 crc kubenswrapper[4679]: I0203 12:05:28.999112 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.000725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.000758 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.000768 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.000795 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.001309 4679 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.18:6443: connect: connection refused" node="crc" Feb 03 12:05:29 crc kubenswrapper[4679]: W0203 12:05:29.060068 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.060192 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.155817 4679 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.155901 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:40:40.61047284 +0000 UTC Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.216771 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b495a737ca77bdc44f91ccb344eb8beee0e8056edfadc0f6522a996bef8caeee"} Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.217972 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4fe5834e48c9d9e2ce65f935312aee3a4b156ada07accfc73ef3d226e7f804c3"} Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.218849 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"855d9e4d84cb7bac2d12db6ca5020af42be5bc08757c767ec4979abc3e157930"} Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.220991 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1ace364d43821bc1f362280da187b8fee216c474b8d92fcdaea3c2e4180f5ed1"} Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.222744 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3681858895e941e5ddd10d00723608b0ebac39ffb77be3b549c2fe54398c852d"} Feb 03 12:05:29 crc kubenswrapper[4679]: W0203 12:05:29.255919 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.256050 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.352329 4679 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.18:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890bb0f59001b68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:05:28.14868772 +0000 UTC m=+0.623583818,LastTimestamp:2026-02-03 12:05:28.14868772 +0000 UTC m=+0.623583818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:05:29 crc kubenswrapper[4679]: W0203 12:05:29.370327 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.370447 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.560484 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="1.6s" Feb 03 12:05:29 crc kubenswrapper[4679]: W0203 12:05:29.619312 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.619506 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.801556 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.803494 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.803548 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.803560 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:29 crc kubenswrapper[4679]: I0203 12:05:29.803601 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:29 crc kubenswrapper[4679]: E0203 12:05:29.804523 4679 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.18:6443: connect: connection refused" node="crc" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.133757 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 12:05:30 crc kubenswrapper[4679]: E0203 12:05:30.135127 4679 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.155952 4679 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.156038 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:02:32.931923799 +0000 UTC Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 
12:05:30.230967 4679 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f" exitCode=0 Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.231157 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.231150 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.232021 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.232066 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.232079 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.232633 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d" exitCode=0 Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.232662 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.232803 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.233764 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.233802 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.233813 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235218 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235815 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235839 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235849 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235876 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235918 4679 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.235924 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.236040 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.236060 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.236737 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.236760 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.236790 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.237833 4679 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518" exitCode=0 Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.237897 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.237909 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.238598 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.238629 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.238639 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.240424 4679 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef" exitCode=0 Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.240464 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef"} Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.240508 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.241628 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.241656 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:30 crc kubenswrapper[4679]: I0203 12:05:30.241664 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:30 crc kubenswrapper[4679]: W0203 12:05:30.685216 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:30 crc kubenswrapper[4679]: E0203 12:05:30.685318 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.155638 4679 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.156574 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 14:13:14.491673044 +0000 UTC Feb 03 12:05:31 crc kubenswrapper[4679]: E0203 12:05:31.161691 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="3.2s" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.256015 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.256177 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.256204 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.256389 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.259708 4679 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2" exitCode=0 Feb 03 
12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.259798 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.259998 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.261863 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.261923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.261936 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.263345 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.263411 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.263424 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.265574 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.265612 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.265629 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.265641 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.272177 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de"} Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.272213 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.272202 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.273183 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.273219 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.273232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.274017 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.274055 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.274066 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4679]: W0203 12:05:31.324314 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.18:6443: connect: connection refused Feb 03 12:05:31 crc kubenswrapper[4679]: E0203 12:05:31.324434 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.18:6443: connect: connection refused" logger="UnhandledError" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.404644 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.405980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.406005 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.406015 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:31 crc kubenswrapper[4679]: I0203 12:05:31.406041 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:31 crc kubenswrapper[4679]: E0203 12:05:31.406587 4679 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.18:6443: connect: connection refused" node="crc" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.156698 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:26:34.594710187 +0000 UTC Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.279981 4679 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1" exitCode=0 Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.280118 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1"} Feb 03 12:05:32 crc 
kubenswrapper[4679]: I0203 12:05:32.280127 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.281223 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.281273 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.281286 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.285308 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.285321 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682"} Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.285411 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.285467 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.285500 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.286743 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.286778 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.286796 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.286884 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.286914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.286924 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.287188 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.287217 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:32 crc kubenswrapper[4679]: I0203 12:05:32.287230 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.157535 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:08:11.774198995 +0000 UTC Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292574 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 
03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292610 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"efc745234ff5eb96ca3c95548186c514cd0a81fb3ed85f11b76dd508b0b2233b"} Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292640 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292660 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"267c1aff643b3ade526a8c39f0ba7c3451d6c0d799a40deb146e97b62b771a57"} Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292675 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292675 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ea6b3c2aa06006a68a19f3b6967b74249a3fb631c1ad2fd0660d2940807e6d17"} Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292882 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69c3da08082d790a6d33bd3d86d43513d13a2833f5cc0edc3f7e3abc62418b54"} Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.292935 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3ca4414eb93e2ceb2ba2ea966534c3f85ca9f237067094ee660f5e3b9daef711"} Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.293540 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.293574 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.293584 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.293576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.293812 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:33 crc kubenswrapper[4679]: I0203 12:05:33.293849 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.073322 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.073734 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.075375 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.075441 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 
12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.075453 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.157975 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:19:17.906302375 +0000 UTC Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.273010 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.295648 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.296710 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.296746 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.296756 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.607520 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.611778 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.611844 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.611860 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.611902 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.747994 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.748258 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.749707 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.749738 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:34 crc kubenswrapper[4679]: I0203 12:05:34.749749 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:35 crc kubenswrapper[4679]: I0203 12:05:35.159113 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:11:05.400032839 +0000 UTC Feb 03 12:05:35 crc kubenswrapper[4679]: I0203 12:05:35.696939 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:35 crc kubenswrapper[4679]: I0203 12:05:35.697140 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 03 12:05:35 crc kubenswrapper[4679]: I0203 12:05:35.698351 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:35 crc kubenswrapper[4679]: I0203 12:05:35.698392 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:35 crc kubenswrapper[4679]: I0203 12:05:35.698401 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.159811 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:28:17.185552538 +0000 UTC Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.241687 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.241941 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.243440 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.243496 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.243507 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.443976 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.444181 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.444221 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.445602 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.445640 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.445654 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.627614 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.988642 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.988897 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.990243 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:36 crc kubenswrapper[4679]: I0203 12:05:36.990304 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:36 crc 
kubenswrapper[4679]: I0203 12:05:36.990319 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.120923 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.160031 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:15:14.042540684 +0000 UTC Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.304486 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.305996 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.306085 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.306110 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.388680 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.388915 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.390488 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.390838 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.390925 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:37 crc kubenswrapper[4679]: I0203 12:05:37.394194 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.160534 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:02:29.265958174 +0000 UTC Feb 03 12:05:38 crc kubenswrapper[4679]: E0203 12:05:38.304027 4679 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.306026 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.306030 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.307080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.307137 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.307162 4679 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.307594 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.307621 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.307632 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.697042 4679 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:05:38 crc kubenswrapper[4679]: I0203 12:05:38.697146 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:05:39 crc kubenswrapper[4679]: I0203 12:05:39.160887 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:06:14.824568728 +0000 UTC Feb 03 12:05:40 crc kubenswrapper[4679]: I0203 12:05:40.161313 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:45:01.538638859 +0000 UTC Feb 03 12:05:41 crc kubenswrapper[4679]: I0203 12:05:41.162444 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:44:31.478618618 +0000 UTC Feb 03 12:05:41 crc kubenswrapper[4679]: W0203 12:05:41.892175 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 03 12:05:41 crc kubenswrapper[4679]: I0203 12:05:41.892311 4679 trace.go:236] Trace[212562342]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:05:31.890) (total time: 10001ms): Feb 03 12:05:41 crc kubenswrapper[4679]: Trace[212562342]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:05:41.892) Feb 03 12:05:41 crc kubenswrapper[4679]: Trace[212562342]: [10.00139538s] [10.00139538s] END Feb 03 12:05:41 crc kubenswrapper[4679]: E0203 12:05:41.892347 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.156219 4679 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.163396 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:14:16.389603205 +0000 UTC Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.319657 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.321813 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682" exitCode=255 Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.321859 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682"} Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.321989 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.323241 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.323298 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.323316 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.326151 4679 scope.go:117] "RemoveContainer" containerID="7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.368163 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.368808 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.370965 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.371050 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.371063 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:42 crc kubenswrapper[4679]: W0203 12:05:42.389671 4679 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.389793 4679 trace.go:236] Trace[1934026285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:05:32.388) (total time: 10001ms): Feb 03 12:05:42 crc kubenswrapper[4679]: Trace[1934026285]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:05:42.389) Feb 03 12:05:42 crc kubenswrapper[4679]: Trace[1934026285]: [10.001611167s] [10.001611167s] END Feb 03 12:05:42 crc kubenswrapper[4679]: E0203 12:05:42.389822 4679 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 03 12:05:42 crc kubenswrapper[4679]: I0203 12:05:42.410831 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.163706 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:35:38.020369149 +0000 UTC Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.186714 4679 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.186791 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.191648 4679 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.191937 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.326458 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.327972 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe"} Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.328078 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.328089 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.329399 4679 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.329434 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.329448 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.329611 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.329718 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.329812 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:43 crc kubenswrapper[4679]: I0203 12:05:43.341286 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.078431 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.078598 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.079660 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.079701 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.079709 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.163998 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:55:20.517213188 +0000 UTC Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.331103 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.331941 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.331983 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:44 crc kubenswrapper[4679]: I0203 12:05:44.331997 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:45 crc kubenswrapper[4679]: I0203 12:05:45.164241 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:01:41.847395943 +0000 UTC Feb 03 12:05:45 crc kubenswrapper[4679]: I0203 12:05:45.636377 4679 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:46 crc kubenswrapper[4679]: I0203 12:05:46.165456 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-18 02:30:14.392716308 +0000 UTC Feb 03 12:05:46 crc kubenswrapper[4679]: I0203 12:05:46.521469 4679 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:46 crc kubenswrapper[4679]: I0203 12:05:46.634600 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:46 crc kubenswrapper[4679]: I0203 12:05:46.634964 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:46 crc kubenswrapper[4679]: I0203 12:05:46.638556 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.151080 4679 apiserver.go:52] "Watching apiserver" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.156923 4679 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.157204 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.157607 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.157853 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.157983 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:47 crc kubenswrapper[4679]: E0203 12:05:47.157979 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.158122 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:47 crc kubenswrapper[4679]: E0203 12:05:47.158130 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.158205 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.158285 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:47 crc kubenswrapper[4679]: E0203 12:05:47.158351 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.160980 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.161284 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.161429 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.161755 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.162240 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.162384 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.162497 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.162687 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.163014 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.166249 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:55:43.357208513 +0000 UTC Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.189596 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.205249 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.217255 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.228606 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.240177 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.252567 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.257381 4679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.265617 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.277755 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.290307 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: I0203 12:05:47.306218 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:47 crc kubenswrapper[4679]: E0203 12:05:47.347889 4679 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.166882 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:01:50.891463616 +0000 UTC Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.194002 4679 trace.go:236] Trace[1301621545]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:05:35.194) (total time: 12999ms): Feb 03 12:05:48 crc kubenswrapper[4679]: Trace[1301621545]: ---"Objects listed" error: 12999ms (12:05:48.193) Feb 03 12:05:48 crc kubenswrapper[4679]: Trace[1301621545]: [12.999415699s] [12.999415699s] END Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.194038 4679 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.194878 4679 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.194847 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.195312 4679 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.196068 4679 trace.go:236] Trace[580049935]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 12:05:35.086) (total time: 13109ms): Feb 03 12:05:48 crc kubenswrapper[4679]: Trace[580049935]: ---"Objects listed" error: 13109ms (12:05:48.195) Feb 03 12:05:48 crc kubenswrapper[4679]: Trace[580049935]: [13.109783191s] [13.109783191s] END Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.196091 4679 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.202094 4679 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.225407 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.240100 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.244002 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.248410 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.250816 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.254192 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.262651 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.276966 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.293766 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296068 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296500 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296564 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296770 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296806 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296831 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296854 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296872 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296897 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296929 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296958 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296976 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.296996 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297025 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297042 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297061 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297078 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297099 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297116 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297138 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297157 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297151 4679 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297178 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297199 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297215 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297233 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297254 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297271 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297285 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297301 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297315 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297334 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297351 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297381 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297402 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297427 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297450 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297472 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297494 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297518 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297539 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297559 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297577 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297598 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297619 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297638 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297659 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297677 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297707 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297725 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297743 4679 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297760 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297776 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297792 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297807 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297822 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297841 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297907 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297940 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297961 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297978 4679 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297995 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298013 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298031 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298049 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298066 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298083 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298104 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298122 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298140 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298158 4679 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298179 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298200 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298218 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298275 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298297 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298314 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298330 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298345 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298382 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 12:05:48 crc 
kubenswrapper[4679]: I0203 12:05:48.298404 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298420 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298438 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298457 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298475 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298492 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298507 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298524 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298540 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298557 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 
12:05:48.298576 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298598 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298622 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298647 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298704 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298730 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298754 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298774 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298798 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298840 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 12:05:48 
crc kubenswrapper[4679]: I0203 12:05:48.298865 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298890 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298913 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298942 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298969 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298994 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299018 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299041 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299064 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299087 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 12:05:48 crc 
kubenswrapper[4679]: I0203 12:05:48.299110 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299289 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299324 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299351 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299401 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299434 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299460 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299486 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299513 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299538 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299570 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299600 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299664 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299695 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299726 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299750 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299797 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299816 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299833 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299852 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299869 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299890 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299908 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299927 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299947 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299970 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299989 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300009 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300056 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300075 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300093 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300109 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300131 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300149 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300169 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300187 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300208 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300229 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300248 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300268 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300286 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300306 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300326 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300345 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300398 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300420 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300438 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300472 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300493 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300513 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300533 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300552 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300570 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300593 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300610 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300629 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300649 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300674 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300692 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300710 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300730 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300751 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300773 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300793 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300813 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300832 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300858 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300876 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300896 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300920 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300939 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300957 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300976 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300994 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301013 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301030 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301050 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301069 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301087 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301107 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301126 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301146 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301162 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301183 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301200 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301219 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301240 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301260 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297398 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301311 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297570 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301327 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297585 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297849 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297893 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.297892 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298217 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298551 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298572 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298590 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298773 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298729 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298845 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.298933 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299001 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299094 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299154 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299167 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299500 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299678 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299890 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.299938 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300205 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300290 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300344 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300661 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301625 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.300952 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301005 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301216 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301706 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301929 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302002 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302219 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302237 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302828 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302848 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.301341 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302963 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.302971 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.303001 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.303065 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.303853 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.303976 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304181 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304210 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304277 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304387 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304411 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304592 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.304666 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304722 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304775 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.304857 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.305202 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.305231 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.305662 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.305900 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306249 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306433 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306590 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306645 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.306724 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:48.806700481 +0000 UTC m=+21.281596589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306724 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306780 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306929 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.306940 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.307066 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.307204 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.308204 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.308283 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.308341 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.308389 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.308668 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.309024 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.309348 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.309452 4679 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.309950 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310111 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310242 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310418 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310486 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310801 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310828 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.310933 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.311144 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.312645 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.312906 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.313734 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.314148 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.314833 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.314894 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753").
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.315471 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.315604 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.316150 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.316178 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.316719 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.317074 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.317310 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.317719 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.318193 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.318528 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.318578 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.318815 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.318869 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.318927 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.319328 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.319578 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.319755 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.319957 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.320251 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.320714 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.321182 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.322403 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.322697 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.323728 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.323802 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.323907 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.323997 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.324035 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.324310 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.324411 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.324533 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.324815 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.325086 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.325223 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.325454 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.325678 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.325748 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.326221 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.326339 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.326353 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.326377 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.326598 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.326731 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.327014 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.327179 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.327269 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.327561 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.328037 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.328058 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.327613 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.327739 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.328097 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.328230 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.328968 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:48.828947978 +0000 UTC m=+21.303844066 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331087 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.328510 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.331182 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:48.831145541 +0000 UTC m=+21.306041769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331380 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331712 4679 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331741 4679 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331758 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331773 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331788 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331809 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331823 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.330213 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.331820 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.329153 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.329194 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.329211 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.329607 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.329771 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.330148 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.330913 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.330956 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.330964 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.332182 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.332272 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.332377 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.332403 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.332417 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.332419 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.332862 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.332982 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.333210 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.333482 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.333794 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.333846 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.333901 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.333854 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.334194 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.334612 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.334719 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.334972 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.335107 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.335622 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.335670 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.335811 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.336074 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.336227 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.337242 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.337254 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.328725 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.337964 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.338117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.339334 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.339735 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.339766 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.339902 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.340174 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.340206 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.340222 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.341146 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.341838 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.341915 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342169 4679 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342204 4679 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342225 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342247 4679 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342272 4679 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342291 4679 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342428 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342450 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342464 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342479 4679 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342492 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342505 4679 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342517 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 
12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.342782 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.343229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.343482 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:48.843450084 +0000 UTC m=+21.318346172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.343555 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:48.843545876 +0000 UTC m=+21.318441964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.343883 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.343998 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344017 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344033 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344055 4679 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344070 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344086 4679 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344101 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344106 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344116 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344175 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344189 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344202 4679 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344218 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344233 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344246 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344260 4679 reconciler_common.go:293] "Volume detached for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344273 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344286 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344299 4679 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344312 4679 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344327 4679 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344341 4679 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344396 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344411 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344423 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344436 4679 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344451 4679 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344463 4679 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344475 4679 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344487 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344500 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344511 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344523 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344536 4679 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344549 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344561 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344574 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344586 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344609 4679 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344623 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344636 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344651 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344672 4679 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344684 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344696 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344709 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344721 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344735 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344749 4679 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344762 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344776 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344788 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344800 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344812 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344824 4679 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node 
\"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344838 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344854 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344868 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344881 4679 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344725 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344895 4679 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344948 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344962 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344983 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.344995 4679 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345006 4679 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345020 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 
12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345034 4679 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345049 4679 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345063 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345077 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345090 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345103 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345116 4679 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345129 4679 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345146 4679 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345160 4679 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345173 4679 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345187 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345200 4679 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 
12:05:48.345213 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345227 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345241 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345245 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.345828 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.346024 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.349099 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.354100 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.356557 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.357607 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.358136 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.358180 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.358470 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.358588 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.364275 4679 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.367284 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.367494 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.367985 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.368169 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.369426 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.369559 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.373907 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.374577 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.374812 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.380126 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.386133 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.386534 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.390253 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.393790 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: W0203 12:05:48.403879 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6274daf70fa8a7805c10e4787dcc818f24c581f475eaa38b4db101b07f731d3d WatchSource:0}: Error finding container 6274daf70fa8a7805c10e4787dcc818f24c581f475eaa38b4db101b07f731d3d: Status 404 returned error can't find the container with id 6274daf70fa8a7805c10e4787dcc818f24c581f475eaa38b4db101b07f731d3d Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.406566 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.408102 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.425950 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.440854 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446023 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446276 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446440 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446522 4679 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446639 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446724 4679 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446803 4679 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446883 4679 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.446967 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447044 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447120 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 
12:05:48.447206 4679 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447278 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447346 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447436 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447505 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447587 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447665 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447735 4679 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447805 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447873 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447946 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448016 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448084 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448152 4679 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448224 4679 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448292 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447622 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448427 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448507 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448537 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448550 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448562 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448596 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448609 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448621 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448633 4679 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" 
DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448644 4679 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448680 4679 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448693 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448706 4679 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448718 4679 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448753 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448766 4679 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448778 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448792 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448805 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448839 4679 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448852 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448915 4679 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448932 4679 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448948 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448960 4679 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.448992 4679 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449007 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449022 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449035 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449066 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449080 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449093 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449107 4679 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449121 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449153 4679 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449166 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449178 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449190 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449202 4679 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449234 4679 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449248 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449265 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449279 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449310 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449323 4679 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449334 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449347 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc 
kubenswrapper[4679]: I0203 12:05:48.449381 4679 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449395 4679 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449407 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449418 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449463 4679 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449480 4679 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449492 4679 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449505 4679 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449517 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449548 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449559 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449572 4679 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449584 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449597 4679 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449631 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449644 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449657 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449674 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449707 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.449722 4679 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.447666 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.673473 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.680676 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 12:05:48 crc kubenswrapper[4679]: W0203 12:05:48.702162 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-0b91e059cd7287e145a2bb6d5a30083eb53766bff659e9c5e9495441af14e4ae WatchSource:0}: Error finding container 0b91e059cd7287e145a2bb6d5a30083eb53766bff659e9c5e9495441af14e4ae: Status 404 returned error can't find the container with id 0b91e059cd7287e145a2bb6d5a30083eb53766bff659e9c5e9495441af14e4ae Feb 03 12:05:48 crc kubenswrapper[4679]: W0203 12:05:48.702576 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9f742fee8cb71363623c3795ba4332369e446ed89285e4518b5b21f5d96c0192 WatchSource:0}: Error finding container 9f742fee8cb71363623c3795ba4332369e446ed89285e4518b5b21f5d96c0192: Status 404 returned error can't find the container with id 9f742fee8cb71363623c3795ba4332369e446ed89285e4518b5b21f5d96c0192 Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.854169 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.854259 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.854298 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854344 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:49.854315191 +0000 UTC m=+22.329211289 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.854389 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.854449 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854475 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854526 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:49.854515977 +0000 UTC m=+22.329412065 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854552 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854568 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854578 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854582 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854608 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:49.854599449 +0000 UTC m=+22.329495537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854623 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:49.85461405 +0000 UTC m=+22.329510138 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854796 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854835 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854849 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: E0203 12:05:48.854930 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:49.854908668 +0000 UTC m=+22.329804756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.928408 4679 csr.go:261] certificate signing request csr-tbcd2 is approved, waiting to be issued Feb 03 12:05:48 crc kubenswrapper[4679]: I0203 12:05:48.939654 4679 csr.go:257] certificate signing request csr-tbcd2 is issued Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.168237 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:56:23.87463463 +0000 UTC Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.210695 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.210749 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.210869 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.210914 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.211047 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.211146 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.359550 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0b91e059cd7287e145a2bb6d5a30083eb53766bff659e9c5e9495441af14e4ae"} Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.360676 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e"} Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.360736 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9f742fee8cb71363623c3795ba4332369e446ed89285e4518b5b21f5d96c0192"} Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.362208 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597"} Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.362254 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5"} Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.362272 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6274daf70fa8a7805c10e4787dcc818f24c581f475eaa38b4db101b07f731d3d"} Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.415586 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.432628 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.449483 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.471069 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.486507 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.508570 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.523257 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.542971 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.559438 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.577427 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.594903 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.617478 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.632835 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.648549 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.664952 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.680006 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:49Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.864104 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.864219 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.864245 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.864269 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.864290 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864409 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864425 4679 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864450 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864468 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864486 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:51.864460984 +0000 UTC m=+24.339357062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864508 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:51.864496885 +0000 UTC m=+24.339392973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864550 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:51.864535006 +0000 UTC m=+24.339431094 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864575 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864594 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864746 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864765 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864701 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:51.8646735 +0000 UTC m=+24.339569588 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:49 crc kubenswrapper[4679]: E0203 12:05:49.864871 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:51.864843995 +0000 UTC m=+24.339740253 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.941447 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 12:00:48 +0000 UTC, rotation deadline is 2026-10-22 01:28:26.017057361 +0000 UTC Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.941523 4679 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6253h22m36.075537859s for next certificate rotation Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.996885 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dz6f8"] Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.997245 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.998386 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2zqm7"] Feb 03 12:05:49 crc kubenswrapper[4679]: I0203 12:05:49.998911 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:49.999973 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.000281 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.000592 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.001120 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.002237 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.002417 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.002668 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.002857 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.019032 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.037319 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.059526 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.079503 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.098395 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.113145 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.128016 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.142673 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.154910 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166759 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-cnibin\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166804 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-kubelet\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166823 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-conf-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166843 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-socket-dir-parent\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166874 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-system-cni-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166894 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtsf\" (UniqueName: \"kubernetes.io/projected/413e7c7d-7c01-4502-8d73-3c3df2e60956-kube-api-access-wqtsf\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166933 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04ed4bc1-0ae0-4644-95d5-384077e1bcf9-hosts-file\") pod \"node-resolver-dz6f8\" (UID: \"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\") " pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166954 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frt5t\" (UniqueName: \"kubernetes.io/projected/04ed4bc1-0ae0-4644-95d5-384077e1bcf9-kube-api-access-frt5t\") pod \"node-resolver-dz6f8\" (UID: \"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\") " pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.166976 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-k8s-cni-cncf-io\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167118 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-netns\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167162 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-cni-bin\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167185 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/413e7c7d-7c01-4502-8d73-3c3df2e60956-cni-binary-copy\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167284 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-hostroot\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167315 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-daemon-config\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167336 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-etc-kubernetes\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167369 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-cni-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167437 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-multus-certs\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167503 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-os-release\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.167521 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-cni-multus\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.168742 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:38:06.779929562 +0000 UTC Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.169964 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.183643 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.205932 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.215550 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.216256 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.217300 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.218030 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.218696 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.219258 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.219950 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.220581 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.221255 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.221886 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.222503 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" 
Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.223295 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.226290 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.226877 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.228071 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.228654 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.229827 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.230386 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.231031 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.232621 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.233095 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.233734 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.234629 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.235294 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.236242 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 
12:05:50.236998 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.239302 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.241035 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.241692 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.243066 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.243784 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.244332 4679 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.244940 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.247420 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.248189 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.249455 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.251441 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.252326 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.253722 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.254619 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.255918 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.256620 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.257963 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.258708 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.260150 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.260799 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.262060 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.262962 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.264448 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.265120 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.266271 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.266953 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.267554 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268548 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-k8s-cni-cncf-io\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268611 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04ed4bc1-0ae0-4644-95d5-384077e1bcf9-hosts-file\") pod \"node-resolver-dz6f8\" (UID: \"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\") " pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268636 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frt5t\" (UniqueName: \"kubernetes.io/projected/04ed4bc1-0ae0-4644-95d5-384077e1bcf9-kube-api-access-frt5t\") pod \"node-resolver-dz6f8\" (UID: \"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\") " pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268659 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-netns\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268678 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-cni-bin\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268685 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268700 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/413e7c7d-7c01-4502-8d73-3c3df2e60956-cni-binary-copy\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268734 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-hostroot\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268755 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-daemon-config\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268774 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-cni-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268795 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-multus-certs\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268815 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-etc-kubernetes\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268833 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-os-release\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268854 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-cni-multus\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268862 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-cni-bin\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268862 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-netns\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268944 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-cni-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268880 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-cnibin\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268981 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-multus-certs\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269016 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-cni-multus\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269019 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-socket-dir-parent\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268952 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-cnibin\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269057 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-hostroot\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" 
Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269068 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-kubelet\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269095 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-conf-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269117 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtsf\" (UniqueName: \"kubernetes.io/projected/413e7c7d-7c01-4502-8d73-3c3df2e60956-kube-api-access-wqtsf\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269165 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-system-cni-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268937 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/04ed4bc1-0ae0-4644-95d5-384077e1bcf9-hosts-file\") pod \"node-resolver-dz6f8\" (UID: \"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\") " pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269265 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-os-release\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269265 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-var-lib-kubelet\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.268986 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-etc-kubernetes\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269286 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269320 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-conf-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc 
kubenswrapper[4679]: I0203 12:05:50.269380 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-socket-dir-parent\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269382 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-system-cni-dir\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.269948 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/413e7c7d-7c01-4502-8d73-3c3df2e60956-multus-daemon-config\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.270031 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/413e7c7d-7c01-4502-8d73-3c3df2e60956-cni-binary-copy\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.270108 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/413e7c7d-7c01-4502-8d73-3c3df2e60956-host-run-k8s-cni-cncf-io\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.292558 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.296285 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtsf\" (UniqueName: \"kubernetes.io/projected/413e7c7d-7c01-4502-8d73-3c3df2e60956-kube-api-access-wqtsf\") pod \"multus-2zqm7\" (UID: \"413e7c7d-7c01-4502-8d73-3c3df2e60956\") " pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.301803 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frt5t\" (UniqueName: \"kubernetes.io/projected/04ed4bc1-0ae0-4644-95d5-384077e1bcf9-kube-api-access-frt5t\") pod \"node-resolver-dz6f8\" (UID: \"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\") " pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.311380 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dz6f8" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.317922 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2zqm7" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.326402 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: W0203 12:05:50.334706 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04ed4bc1_0ae0_4644_95d5_384077e1bcf9.slice/crio-ed5fdb8d0639f661d9be2d5d257aa8b45258a21a79927faec500d73e69601386 WatchSource:0}: Error finding container ed5fdb8d0639f661d9be2d5d257aa8b45258a21a79927faec500d73e69601386: Status 404 returned error can't find the container with id ed5fdb8d0639f661d9be2d5d257aa8b45258a21a79927faec500d73e69601386 Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.350184 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.366449 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dz6f8" event={"ID":"04ed4bc1-0ae0-4644-95d5-384077e1bcf9","Type":"ContainerStarted","Data":"ed5fdb8d0639f661d9be2d5d257aa8b45258a21a79927faec500d73e69601386"} Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.371121 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerStarted","Data":"0bdf780a41061e314ec28672d18532be81579c58345334005db94e4abba33a61"} Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.374482 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.398773 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.398864 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8qvcg"] Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.399317 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.399974 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7ws5"] Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.401147 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7f55n"] Feb 03 12:05:50 crc kubenswrapper[4679]: W0203 12:05:50.401590 4679 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 03 12:05:50 crc kubenswrapper[4679]: E0203 12:05:50.401630 4679 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:05:50 crc kubenswrapper[4679]: W0203 12:05:50.401737 4679 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 03 12:05:50 crc kubenswrapper[4679]: E0203 12:05:50.401753 4679 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:05:50 crc kubenswrapper[4679]: W0203 12:05:50.401788 4679 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 03 12:05:50 crc kubenswrapper[4679]: W0203 12:05:50.401799 4679 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 03 12:05:50 crc kubenswrapper[4679]: E0203 12:05:50.401811 4679 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:05:50 crc kubenswrapper[4679]: E0203 12:05:50.401812 4679 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.401897 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.401898 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.401903 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.405617 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.405839 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.406185 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.406323 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.406453 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.406580 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.406716 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.406885 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.410856 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.422320 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.444680 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.459811 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.472945 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.493003 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.506920 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.520793 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.542876 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.557872 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571730 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-var-lib-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571771 4679 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-ovn\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571789 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-env-overrides\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571811 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571830 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-bin\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571847 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovn-node-metrics-cert\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.571910 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-netd\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572000 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-log-socket\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572023 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-kubelet\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572040 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xhpj\" (UniqueName: \"kubernetes.io/projected/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-kube-api-access-6xhpj\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572069 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-proxy-tls\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572086 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572102 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-mcd-auth-proxy-config\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572117 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cni-binary-copy\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572141 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572156 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-script-lib\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572171 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-slash\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572246 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-netns\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572263 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-systemd\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572280 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-config\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572295 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-system-cni-dir\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572319 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572337 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-node-log\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572351 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-etc-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572409 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572609 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-systemd-units\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572636 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9r9w\" (UniqueName: \"kubernetes.io/projected/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-kube-api-access-v9r9w\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572706 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-os-release\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572728 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8lm\" (UniqueName: \"kubernetes.io/projected/e65f2e85-782a-4313-b584-e3f1c9c8cf76-kube-api-access-qt8lm\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.572941 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-rootfs\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.573080 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cnibin\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.575641 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.596640 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.620034 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.636807 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.657030 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:50Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.674971 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-slash\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675038 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-netns\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675098 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-systemd\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675163 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-slash\") pod 
\"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675212 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-system-cni-dir\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675179 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-netns\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675301 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-config\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675333 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-system-cni-dir\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675342 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675400 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675333 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-systemd\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675457 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-node-log\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675400 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-node-log\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675517 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-etc-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675546 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675571 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-systemd-units\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675590 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9r9w\" (UniqueName: \"kubernetes.io/projected/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-kube-api-access-v9r9w\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675614 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-os-release\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675650 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-rootfs\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675674 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cnibin\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675693 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8lm\" (UniqueName: \"kubernetes.io/projected/e65f2e85-782a-4313-b584-e3f1c9c8cf76-kube-api-access-qt8lm\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675716 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-var-lib-openvswitch\") pod 
\"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675738 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-ovn\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675758 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-env-overrides\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675775 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675797 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-bin\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675813 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovn-node-metrics-cert\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675851 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-log-socket\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675867 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-netd\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675894 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-kubelet\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675912 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xhpj\" (UniqueName: \"kubernetes.io/projected/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-kube-api-access-6xhpj\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675951 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-proxy-tls\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675975 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.675996 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-mcd-auth-proxy-config\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676015 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cni-binary-copy\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676045 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676062 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-script-lib\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676353 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-config\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676416 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676467 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-etc-openvswitch\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676508 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-bin\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676510 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cnibin\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676537 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-systemd-units\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676802 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-ovn\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676865 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-script-lib\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676955 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676966 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-rootfs\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677010 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.676875 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-var-lib-openvswitch\") pod 
\"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677054 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677075 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-log-socket\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677096 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e65f2e85-782a-4313-b584-e3f1c9c8cf76-os-release\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677104 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-netd\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677135 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-kubelet\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677170 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-env-overrides\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.677658 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e65f2e85-782a-4313-b584-e3f1c9c8cf76-cni-binary-copy\") pod \"multus-additional-cni-plugins-7f55n\" (UID: \"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.680869 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovn-node-metrics-cert\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.699145 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8lm\" (UniqueName: \"kubernetes.io/projected/e65f2e85-782a-4313-b584-e3f1c9c8cf76-kube-api-access-qt8lm\") pod \"multus-additional-cni-plugins-7f55n\" (UID: 
\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\") " pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.704676 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xhpj\" (UniqueName: \"kubernetes.io/projected/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-kube-api-access-6xhpj\") pod \"ovnkube-node-b7ws5\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.739450 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7f55n" Feb 03 12:05:50 crc kubenswrapper[4679]: I0203 12:05:50.749038 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:50 crc kubenswrapper[4679]: W0203 12:05:50.760428 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2691dd3_b30a_4cf3_8bff_1d84cb36b3fa.slice/crio-1a98b8bcae840432f85cf6a5750c3c3e8904fc56c9152cf740898c2d4ce9dd10 WatchSource:0}: Error finding container 1a98b8bcae840432f85cf6a5750c3c3e8904fc56c9152cf740898c2d4ce9dd10: Status 404 returned error can't find the container with id 1a98b8bcae840432f85cf6a5750c3c3e8904fc56c9152cf740898c2d4ce9dd10 Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.169152 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:29:14.694234142 +0000 UTC Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.211409 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.211454 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.211429 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.211584 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.211704 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.211791 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.375179 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerStarted","Data":"2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.376541 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f" exitCode=0 Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.376619 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.376664 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"1a98b8bcae840432f85cf6a5750c3c3e8904fc56c9152cf740898c2d4ce9dd10"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.378146 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dz6f8" event={"ID":"04ed4bc1-0ae0-4644-95d5-384077e1bcf9","Type":"ContainerStarted","Data":"93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.379604 4679 generic.go:334] "Generic (PLEG): container finished" podID="e65f2e85-782a-4313-b584-e3f1c9c8cf76" containerID="db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0" exitCode=0 Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.379683 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerDied","Data":"db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.379712 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerStarted","Data":"82ffb794e8ec895856e941b5c3fe722e95cc7b7ca461ab215b7080fe904ec328"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.381281 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d"} Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.394892 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.411819 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.426327 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.438590 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.450632 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.466044 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.489310 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.499493 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.502627 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.506607 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9r9w\" (UniqueName: \"kubernetes.io/projected/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-kube-api-access-v9r9w\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.510064 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.526458 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.543443 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.563285 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.585612 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.607111 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.623050 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.642925 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.657575 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.671805 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.677654 4679 secret.go:188] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.677723 4679 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.677776 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-proxy-tls podName:6483dca4-cab1-4db4-9aa9-0b616c6e9cbb nodeName:}" failed. No retries permitted until 2026-02-03 12:05:52.177742748 +0000 UTC m=+24.652638836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-proxy-tls") pod "machine-config-daemon-8qvcg" (UID: "6483dca4-cab1-4db4-9aa9-0b616c6e9cbb") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.677813 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-mcd-auth-proxy-config podName:6483dca4-cab1-4db4-9aa9-0b616c6e9cbb nodeName:}" failed. No retries permitted until 2026-02-03 12:05:52.177789079 +0000 UTC m=+24.652685337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-mcd-auth-proxy-config") pod "machine-config-daemon-8qvcg" (UID: "6483dca4-cab1-4db4-9aa9-0b616c6e9cbb") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.686037 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.692532 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.700899 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.714982 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.730847 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.746560 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.760302 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.770510 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.782905 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.799055 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:51Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.889464 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.889592 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889628 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:05:55.889599128 +0000 UTC m=+28.364495216 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.889664 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.889712 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.889742 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889753 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889773 4679 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889789 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889835 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889850 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889860 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889839 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:55.889828504 +0000 UTC m=+28.364724592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889897 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:55.889888076 +0000 UTC m=+28.364784164 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889907 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889944 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.889961 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:55.889950018 +0000 UTC m=+28.364846106 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: E0203 12:05:51.890142 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:05:55.890095202 +0000 UTC m=+28.364991440 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:51 crc kubenswrapper[4679]: I0203 12:05:51.932153 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.170342 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:20:43.32081179 +0000 UTC Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.193944 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-proxy-tls\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.194004 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-mcd-auth-proxy-config\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.194796 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-mcd-auth-proxy-config\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.199036 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6483dca4-cab1-4db4-9aa9-0b616c6e9cbb-proxy-tls\") pod \"machine-config-daemon-8qvcg\" (UID: \"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\") " pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.228099 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:05:52 crc kubenswrapper[4679]: W0203 12:05:52.242430 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6483dca4_cab1_4db4_9aa9_0b616c6e9cbb.slice/crio-77062a17fbbf0a9b45d6b089d46e82e1e01a501b0cd751c25ea676367c14cc82 WatchSource:0}: Error finding container 77062a17fbbf0a9b45d6b089d46e82e1e01a501b0cd751c25ea676367c14cc82: Status 404 returned error can't find the container with id 77062a17fbbf0a9b45d6b089d46e82e1e01a501b0cd751c25ea676367c14cc82 Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.388799 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerStarted","Data":"ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.395473 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"77062a17fbbf0a9b45d6b089d46e82e1e01a501b0cd751c25ea676367c14cc82"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.400632 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.400655 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.400666 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.400677 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.400687 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3"} Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.427607 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.453383 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.472340 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.489686 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.505774 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.519677 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.537878 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.561495 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.578907 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.592625 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.607841 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.623419 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:52 crc kubenswrapper[4679]: I0203 12:05:52.641928 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:52Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.171492 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:02:10.792799485 +0000 UTC Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.211089 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.211137 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.211204 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:53 crc kubenswrapper[4679]: E0203 12:05:53.211625 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:53 crc kubenswrapper[4679]: E0203 12:05:53.211710 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:53 crc kubenswrapper[4679]: E0203 12:05:53.211820 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.408053 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0"} Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.409880 4679 generic.go:334] "Generic (PLEG): container finished" podID="e65f2e85-782a-4313-b584-e3f1c9c8cf76" containerID="ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642" exitCode=0 Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.409967 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerDied","Data":"ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642"} Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.411781 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e"} Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.411820 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd"} Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.426502 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.448819 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.469427 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.484983 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.487723 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-n4gcf"] Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.488259 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.489906 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.491663 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.492202 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.492403 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.500308 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.517936 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.530184 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.545172 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.561491 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.579395 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.592641 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.611181 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.611501 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a0d56c1-f4af-457d-a63d-2bef7730f28a-serviceca\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.611579 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a0d56c1-f4af-457d-a63d-2bef7730f28a-host\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.611616 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg54l\" (UniqueName: \"kubernetes.io/projected/4a0d56c1-f4af-457d-a63d-2bef7730f28a-kube-api-access-cg54l\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.627540 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.642203 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.657402 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.673147 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.692827 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\
":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.712645 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a0d56c1-f4af-457d-a63d-2bef7730f28a-host\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.712719 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg54l\" (UniqueName: \"kubernetes.io/projected/4a0d56c1-f4af-457d-a63d-2bef7730f28a-kube-api-access-cg54l\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.712782 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a0d56c1-f4af-457d-a63d-2bef7730f28a-serviceca\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.712829 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a0d56c1-f4af-457d-a63d-2bef7730f28a-host\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.714204 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/4a0d56c1-f4af-457d-a63d-2bef7730f28a-serviceca\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.716811 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"nam
e\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.731948 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg54l\" (UniqueName: \"kubernetes.io/projected/4a0d56c1-f4af-457d-a63d-2bef7730f28a-kube-api-access-cg54l\") pod \"node-ca-n4gcf\" (UID: \"4a0d56c1-f4af-457d-a63d-2bef7730f28a\") " pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.734072 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.748320 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.761205 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.773265 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.785376 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.797440 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.799939 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-n4gcf" Feb 03 12:05:53 crc kubenswrapper[4679]: W0203 12:05:53.814034 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a0d56c1_f4af_457d_a63d_2bef7730f28a.slice/crio-4794eecb19f00bf1a2fcebafa5492a743766bb546f5cd5b8f0f87cdd93a088c0 WatchSource:0}: Error finding container 4794eecb19f00bf1a2fcebafa5492a743766bb546f5cd5b8f0f87cdd93a088c0: Status 404 returned error can't find the container with id 4794eecb19f00bf1a2fcebafa5492a743766bb546f5cd5b8f0f87cdd93a088c0 Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.826519 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.843693 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:53 crc kubenswrapper[4679]: I0203 12:05:53.860612 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:53Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.172314 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:32:19.308028397 +0000 UTC Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.421130 4679 generic.go:334] "Generic (PLEG): container finished" podID="e65f2e85-782a-4313-b584-e3f1c9c8cf76" containerID="9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a" exitCode=0 Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.421199 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerDied","Data":"9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.425944 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n4gcf" event={"ID":"4a0d56c1-f4af-457d-a63d-2bef7730f28a","Type":"ContainerStarted","Data":"ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.426015 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n4gcf" event={"ID":"4a0d56c1-f4af-457d-a63d-2bef7730f28a","Type":"ContainerStarted","Data":"4794eecb19f00bf1a2fcebafa5492a743766bb546f5cd5b8f0f87cdd93a088c0"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.437161 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.455017 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.474900 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.486483 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.501489 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.519941 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.531091 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.547239 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.567444 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.582271 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.595445 4679 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.596420 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.600208 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.600257 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.600271 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.600453 4679 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.608447 4679 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.608882 4679 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.611874 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.611933 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.611944 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.611963 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.611974 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.612966 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.631635 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: E0203 12:05:54.631732 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.636448 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.636490 4679 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.636502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.636521 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.636532 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.648718 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-va
r-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: E0203 12:05:54.649552 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.653761 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.653912 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.653927 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.653944 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.653954 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.665068 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: E0203 12:05:54.668843 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.672165 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.672233 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.672248 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.672270 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.672285 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.679177 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: E0203 12:05:54.688204 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.692206 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.692247 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.692261 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.692283 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.692298 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.695588 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: E0203 12:05:54.705267 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: E0203 12:05:54.705450 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.707974 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.708036 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.708045 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.708066 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.708077 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.710462 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.725878 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.737506 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.750082 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.766783 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.781692 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.797630 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.810914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.810959 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.810968 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.810987 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.811004 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.817537 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.830589 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.847282 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.865956 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:54Z 
is after 2025-08-24T17:21:41Z"
Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.913906 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.913948 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.913962 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.913982 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:54 crc kubenswrapper[4679]: I0203 12:05:54.913996 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:54Z","lastTransitionTime":"2026-02-03T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.017085 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.017131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.017141 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.017159 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.017169 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.120295 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.120341 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.120368 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.120395 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.120409 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.173294 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:15:11.202265941 +0000 UTC
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.210968 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.211053 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.210968 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.211168 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.211291 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.211378 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.222793 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.222838 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.222852 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.222872 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.222885 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.325640 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.325687 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.325697 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.325715 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.325773 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.430889 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.430933 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.430942 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.430958 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.430967 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.435596 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.437633 4679 generic.go:334] "Generic (PLEG): container finished" podID="e65f2e85-782a-4313-b584-e3f1c9c8cf76" containerID="cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4" exitCode=0 Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.437691 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerDied","Data":"cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.451514 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.468019 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.478382 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.492506 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.511399 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:
05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.526499 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.535462 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.535516 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.535525 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.535543 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.535553 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.541609 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.554931 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.566123 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.576545 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.589519 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.605401 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.618458 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.636194 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:55Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.638408 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.638452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.638464 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.638484 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.638496 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.741101 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.741141 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.741150 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.741169 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.741179 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.844509 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.844570 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.844590 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.844616 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.844630 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.935188 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.935319 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935441 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 12:06:03.935407778 +0000 UTC m=+36.410303866 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935502 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935573 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:03.935555953 +0000 UTC m=+36.410452041 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.935597 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.935637 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.935662 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935731 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935762 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:03.935753898 +0000 UTC m=+36.410649996 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935891 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935909 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935924 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935968 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:03.935957804 +0000 UTC m=+36.410854162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.935963 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.936020 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.936039 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:55 crc kubenswrapper[4679]: E0203 12:05:55.936138 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:03.936110158 +0000 UTC m=+36.411006456 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.948575 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.948639 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.948666 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.948692 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:55 crc kubenswrapper[4679]: I0203 12:05:55.948712 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:55Z","lastTransitionTime":"2026-02-03T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.051601 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.051645 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.051657 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.051676 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.051693 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.155108 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.155220 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.155238 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.155257 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.155270 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.174521 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:53:34.875742528 +0000 UTC Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.258402 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.258761 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.258774 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.258800 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.258816 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.361831 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.361896 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.361911 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.361934 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.361949 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.445027 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerStarted","Data":"cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.462425 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.464115 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.464169 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.464185 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.464204 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.464217 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.481245 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.494861 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.507801 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.520451 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.534749 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.549622 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.567058 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.567121 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.567132 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.567156 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.567170 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.567351 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.582596 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.599712 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.615000 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.626395 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.641480 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.661340 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.670085 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.670141 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.670159 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.670179 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.670191 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.773133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.773176 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.773189 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.773204 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.773215 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.878735 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.878788 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.878801 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.878819 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.878831 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.982538 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.982611 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.982624 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.982646 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:56 crc kubenswrapper[4679]: I0203 12:05:56.982660 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:56Z","lastTransitionTime":"2026-02-03T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.087020 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.087072 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.087084 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.087106 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.087121 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.125282 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.138334 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.153615 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.165835 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.175165 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-24 19:32:09.931161323 +0000 UTC Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.180495 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.195327 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.195426 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.195441 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.195465 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.195501 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.202065 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.211402 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.211472 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:57 crc kubenswrapper[4679]: E0203 12:05:57.211554 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:57 crc kubenswrapper[4679]: E0203 12:05:57.211652 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.211487 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:57 crc kubenswrapper[4679]: E0203 12:05:57.211786 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.217726 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.232850 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.249417 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.268088 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.286078 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"
reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.298453 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.298500 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.298512 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.298532 4679 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.298544 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.308633 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z 
is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.325422 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.342024 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.359703 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.401113 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.401155 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.401169 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.401186 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.401197 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.452859 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.453103 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.456939 4679 generic.go:334] "Generic (PLEG): container finished" podID="e65f2e85-782a-4313-b584-e3f1c9c8cf76" containerID="cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407" exitCode=0 Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.456981 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerDied","Data":"cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.469978 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.483804 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.498649 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.505145 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.505198 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.505211 4679 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.505236 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.505249 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.507396 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.518742 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c28268
2034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.535187 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.554593 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.574060 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.591780 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.594718 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.604312 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.608716 4679 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.608761 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.608774 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.608795 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.608809 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.615704 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.630316 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.642392 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.654959 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.665090 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.677702 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.689065 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.702201 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.710943 4679 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.710972 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.710980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.710999 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.711012 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.713262 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.726432 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.738958 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.761741 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.776938 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.790123 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.803876 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.814718 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.814787 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.814809 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.814833 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.814851 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.819067 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8
lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.837429 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.851486 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.867817 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:57Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.917760 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.917801 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.917810 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.917824 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:57 crc kubenswrapper[4679]: I0203 12:05:57.917835 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:57Z","lastTransitionTime":"2026-02-03T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.008538 4679 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.020996 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.021043 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.021055 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.021075 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.021088 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.123918 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.123973 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.123987 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.124007 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.124018 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.176005 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:13:28.939768008 +0000 UTC Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.227154 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.227202 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.227213 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.227235 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.227245 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.227755 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.247682 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.263878 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.277148 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.291946 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.317114 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.329187 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.329235 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.329252 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.329274 4679 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.329287 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.331555 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.344165 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.356699 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.368115 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.379785 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.389161 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.400796 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.414347 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.431596 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.431627 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.431637 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.431654 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.431664 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.462290 4679 generic.go:334] "Generic (PLEG): container finished" podID="e65f2e85-782a-4313-b584-e3f1c9c8cf76" containerID="578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c" exitCode=0 Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.462383 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerDied","Data":"578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.462832 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.480276 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.484668 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.495714 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.509640 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 
2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.525217 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.534431 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.534641 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.534757 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.534847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.534924 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.540072 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.553571 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.567404 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.587663 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c28268
2034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.602237 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.616571 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.634671 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.641289 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.641393 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.641413 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.641442 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.641465 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.657171 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.670157 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.682605 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.699417 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.713220 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.726680 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.742311 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.744266 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.744314 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.744328 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.744349 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.744379 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.756207 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.772382 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7
62065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.792409 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.808001 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.821545 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.833579 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.843220 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.847167 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.847204 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.847216 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.847236 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.847250 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.858300 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.870551 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.889018 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.950313 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.950388 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.950440 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.950469 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:58 crc kubenswrapper[4679]: I0203 12:05:58.950482 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:58Z","lastTransitionTime":"2026-02-03T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.058080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.058133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.058143 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.058163 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.058175 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.160283 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.160319 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.160330 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.160346 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.160372 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.177033 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:06:42.275676967 +0000 UTC Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.210912 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:05:59 crc kubenswrapper[4679]: E0203 12:05:59.211068 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.211455 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:05:59 crc kubenswrapper[4679]: E0203 12:05:59.211507 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.211547 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:05:59 crc kubenswrapper[4679]: E0203 12:05:59.211583 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.262418 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.262464 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.262479 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.262498 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.262513 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.365086 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.365155 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.365170 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.365196 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.365211 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.467499 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.467563 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.467578 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.467600 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.467614 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.477263 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" event={"ID":"e65f2e85-782a-4313-b584-e3f1c9c8cf76","Type":"ContainerStarted","Data":"61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.493661 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"c
ri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.512842 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.527421 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.542213 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.557737 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.569859 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.570942 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.571008 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.571027 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.571050 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.571065 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.581958 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.595396 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.610616 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.625080 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.640247 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.656955 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.669885 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.674284 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.674336 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.674346 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.674384 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.674403 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.682875 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:05:59Z is after 2025-08-24T17:21:41Z" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.778699 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.778743 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.778754 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.778772 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.778783 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.881085 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.881143 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.881158 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.881174 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.881188 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.983679 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.983744 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.983757 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.983780 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:05:59 crc kubenswrapper[4679]: I0203 12:05:59.983795 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:05:59Z","lastTransitionTime":"2026-02-03T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.086674 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.086738 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.086755 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.086777 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.086795 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.177932 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:16:36.77681407 +0000 UTC Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.189100 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.189140 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.189149 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.189166 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.189176 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.292177 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.292512 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.292623 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.292812 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.292856 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.394897 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.394937 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.394948 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.394965 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.394979 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.482161 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/0.log" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.484699 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5" exitCode=1 Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.484767 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.486430 4679 scope.go:117] "RemoveContainer" containerID="ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.497279 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.497344 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.497379 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.497409 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.497422 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.502846 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.517413 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.530998 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.542024 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.553280 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.564714 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.578995 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.593339 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.599438 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.599468 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.599482 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.599501 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.599513 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.607251 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.628575 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.650008 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.665642 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.681694 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.705170 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.705231 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc 
kubenswrapper[4679]: I0203 12:06:00.705298 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.705324 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.705339 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.707540 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c28268
2034261f0779e5de2f6b07a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 12:05:59.904563 5920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.904793 5920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.905141 5920 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.905404 5920 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 12:05:59.905802 5920 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 12:05:59.905807 5920 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 12:05:59.905822 5920 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 12:05:59.905832 5920 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 12:05:59.905833 5920 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 12:05:59.905841 5920 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 12:05:59.905852 5920 factory.go:656] Stopping watch factory\\\\nI0203 12:05:59.905867 5920 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:00Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.808064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.808116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.808131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.808153 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.808165 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.911024 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.911077 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.911089 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.911111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:00 crc kubenswrapper[4679]: I0203 12:06:00.911128 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:00Z","lastTransitionTime":"2026-02-03T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.013740 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.013799 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.013816 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.013842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.013868 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.116163 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.116236 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.116252 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.116271 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.116282 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.178743 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:03:50.93015389 +0000 UTC Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.211196 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.211282 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.211232 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:01 crc kubenswrapper[4679]: E0203 12:06:01.211431 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:01 crc kubenswrapper[4679]: E0203 12:06:01.211498 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:01 crc kubenswrapper[4679]: E0203 12:06:01.211610 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.218532 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.218564 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.218576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.218606 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.218618 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.322077 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.322135 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.322151 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.322172 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.322185 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.425250 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.425297 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.425307 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.425325 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.425337 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.490872 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/1.log" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.491707 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/0.log" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.495035 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659" exitCode=1 Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.495104 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.495163 4679 scope.go:117] "RemoveContainer" containerID="ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.495900 4679 scope.go:117] "RemoveContainer" containerID="5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659" Feb 03 12:06:01 crc kubenswrapper[4679]: E0203 12:06:01.496213 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.512018 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.527753 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.527786 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.527795 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.527819 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.527833 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.529265 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.540912 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.553228 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.566865 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.584429 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 12:05:59.904563 5920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.904793 5920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.905141 5920 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.905404 5920 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 12:05:59.905802 5920 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 12:05:59.905807 5920 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 12:05:59.905822 5920 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 12:05:59.905832 5920 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 12:05:59.905833 5920 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 12:05:59.905841 5920 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 12:05:59.905852 5920 factory.go:656] Stopping watch factory\\\\nI0203 12:05:59.905867 5920 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.596351 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.607563 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.621115 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.630599 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.630662 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.630674 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.630694 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.630707 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.633092 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.645471 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.661635 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.675965 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.689713 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:01Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.732887 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.732918 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.732927 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.732944 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.732954 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.835433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.835483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.835492 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.835513 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.835526 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.938060 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.938091 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.938100 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.938117 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:01 crc kubenswrapper[4679]: I0203 12:06:01.938129 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:01Z","lastTransitionTime":"2026-02-03T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.040945 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.041005 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.041017 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.041040 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.041056 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.143549 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.143602 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.143614 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.143636 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.143650 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.179946 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:22:10.565927989 +0000 UTC Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.246741 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.247014 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.247144 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.247246 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.247324 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.349910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.350166 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.350228 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.350297 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.350398 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.394970 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4"] Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.396006 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.398272 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.399228 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.418617 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded27af7f0de1d7e31b4f6bf21a3fe9ea2c282682034261f0779e5de2f6b07a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:00Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 12:05:59.904563 5920 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.904793 5920 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.905141 5920 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 12:05:59.905404 5920 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 12:05:59.905802 5920 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 12:05:59.905807 5920 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 12:05:59.905822 5920 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 12:05:59.905832 5920 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 12:05:59.905833 5920 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 12:05:59.905841 5920 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 12:05:59.905852 5920 
factory.go:656] Stopping watch factory\\\\nI0203 12:05:59.905867 5920 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 
12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.432510 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.446445 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.453559 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.453781 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.453894 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.453980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.454053 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.461098 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.483452 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.500591 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.501337 4679 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/1.log" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.506527 4679 scope.go:117] "RemoveContainer" containerID="5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659" Feb 03 12:06:02 crc kubenswrapper[4679]: E0203 12:06:02.506737 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.513762 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 
12:06:02.518329 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qrn\" (UniqueName: \"kubernetes.io/projected/eb38298a-164d-4175-9d84-e9f199da55ca-kube-api-access-l9qrn\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.518432 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb38298a-164d-4175-9d84-e9f199da55ca-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.518476 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb38298a-164d-4175-9d84-e9f199da55ca-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.518501 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb38298a-164d-4175-9d84-e9f199da55ca-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.528203 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.541604 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.557128 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.557623 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.557668 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.557678 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.557699 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.557719 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.569659 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.584754 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.600298 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.615398 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.620192 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qrn\" (UniqueName: \"kubernetes.io/projected/eb38298a-164d-4175-9d84-e9f199da55ca-kube-api-access-l9qrn\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.620324 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb38298a-164d-4175-9d84-e9f199da55ca-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.620427 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb38298a-164d-4175-9d84-e9f199da55ca-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" 
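[Editor's annotation, not part of the captured journal: every "Failed to update status for pod" entry in this stretch shares one root cause. The kubelet's status manager patches pod status through the API server, which first calls the validating webhook "pod.network-node-identity.openshift.io" at https://127.0.0.1:9743; that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-03, so every TLS handshake fails and every patch is rejected. Separately, the "Node became not ready" entries are the kubelet reporting that no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. The sketch below is a minimal, standalone Go illustration of the validity-window check behind the "certificate has expired or is not yet valid" error; it is not kubelet or CRC code, and the certificate path is hypothetical.]

// Illustrative sketch only: reproduces the x509 NotBefore/NotAfter check
// that the TLS handshakes above are failing. The file path is hypothetical;
// on a real cluster the webhook serving cert lives in a Kubernetes secret.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path, for illustration only.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// The comparison behind "certificate has expired or is not yet valid":
	// the verification time must fall inside [NotBefore, NotAfter].
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is currently valid")
}

[This mirrors the comparison quoted verbatim in the log ("current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z"). Assuming the certificate really is expired rather than the node clock being skewed forward, rotating the webhook's serving certificate would clear the status-patch failures; the NotReady condition clears independently once a CNI config appears.]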
Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.620470 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb38298a-164d-4175-9d84-e9f199da55ca-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.621499 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb38298a-164d-4175-9d84-e9f199da55ca-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.621503 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb38298a-164d-4175-9d84-e9f199da55ca-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.628556 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb38298a-164d-4175-9d84-e9f199da55ca-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.636752 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.636994 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qrn\" (UniqueName: \"kubernetes.io/projected/eb38298a-164d-4175-9d84-e9f199da55ca-kube-api-access-l9qrn\") pod \"ovnkube-control-plane-749d76644c-tgmp4\" (UID: \"eb38298a-164d-4175-9d84-e9f199da55ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 
12:06:02.651231 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.659737 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.660084 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.660185 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.660307 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.660434 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.662883 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.677880 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.689771 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.705328 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.710721 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.722412 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: W0203 12:06:02.725311 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb38298a_164d_4175_9d84_e9f199da55ca.slice/crio-a9d761ee54153713b74f47a5409402f212a6d4988283bbbecf899b4bff330f6f WatchSource:0}: Error finding container a9d761ee54153713b74f47a5409402f212a6d4988283bbbecf899b4bff330f6f: Status 404 returned error can't find the container with id a9d761ee54153713b74f47a5409402f212a6d4988283bbbecf899b4bff330f6f Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.738395 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.752710 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.763462 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.763504 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.763516 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.763535 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.763547 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.765450 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.779110 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.793063 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.809032 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.831427 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.846350 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.859736 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:02Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.866870 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.866927 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.866940 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.866960 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.866974 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.970036 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.970095 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.970107 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.970127 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:02 crc kubenswrapper[4679]: I0203 12:06:02.970138 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:02Z","lastTransitionTime":"2026-02-03T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.072298 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.072337 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.072353 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.072396 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.072409 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.175289 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.175328 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.175338 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.175368 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.175379 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.181512 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:29:09.931310876 +0000 UTC Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.211723 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:03 crc kubenswrapper[4679]: E0203 12:06:03.211877 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.211946 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:03 crc kubenswrapper[4679]: E0203 12:06:03.211986 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.212024 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:03 crc kubenswrapper[4679]: E0203 12:06:03.212059 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.277519 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.277564 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.277576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.277594 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.277611 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.380420 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.380477 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.380505 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.380534 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.380551 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.483290 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.483346 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.483383 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.483414 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.483429 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.509955 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" event={"ID":"eb38298a-164d-4175-9d84-e9f199da55ca","Type":"ContainerStarted","Data":"eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.510011 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" event={"ID":"eb38298a-164d-4175-9d84-e9f199da55ca","Type":"ContainerStarted","Data":"ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.510022 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" event={"ID":"eb38298a-164d-4175-9d84-e9f199da55ca","Type":"ContainerStarted","Data":"a9d761ee54153713b74f47a5409402f212a6d4988283bbbecf899b4bff330f6f"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.525025 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.542924 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.555809 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.568011 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.581055 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.586931 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.586982 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.586995 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.587016 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.587032 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.592724 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.605659 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.619557 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.635375 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.647644 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.661323 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.674245 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.685406 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.689378 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.689415 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.689428 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.689446 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.689519 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.702280 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e44
1d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.723286 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f
894e3986bd90852eb1b5d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:03Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.792807 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.792853 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.792861 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.792880 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.792891 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.895200 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.895257 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.895271 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.895293 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.895306 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.998221 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.998273 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.998284 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.998304 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:03 crc kubenswrapper[4679]: I0203 12:06:03.998317 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:03Z","lastTransitionTime":"2026-02-03T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.035579 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.035737 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:06:20.035703068 +0000 UTC m=+52.510599156 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.035773 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.035808 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.035839 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.035867 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.035938 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.035983 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036003 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.035996 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:20.035984856 +0000 UTC m=+52.510880944 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036023 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036034 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:20.036024997 +0000 UTC m=+52.510921095 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036038 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036070 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:20.036060448 +0000 UTC m=+52.510956536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036618 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036656 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036668 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.036714 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:20.036700946 +0000 UTC m=+52.511597034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.101857 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.101915 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.101931 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.101953 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.101965 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.181947 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:23:34.721095487 +0000 UTC Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.205019 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.205055 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.205064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.205083 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.205097 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.209622 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-j8bgc"] Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.210173 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.210262 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.225323 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.238225 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.238304 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/ba5e4da3-455d-4394-824c-2dfe080bc2c5-kube-api-access-wr7lv\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.239566 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.251633 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.265810 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.284734 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c7919
3a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 
12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.296299 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.307400 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.307452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.307465 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.307483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.307495 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.311046 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.324768 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.337450 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.340820 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.341098 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/ba5e4da3-455d-4394-824c-2dfe080bc2c5-kube-api-access-wr7lv\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.341271 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.341380 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:04.841344535 +0000 UTC m=+37.316240623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.356135 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.357211 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr7lv\" (UniqueName: \"kubernetes.io/projected/ba5e4da3-455d-4394-824c-2dfe080bc2c5-kube-api-access-wr7lv\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " 
pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.368564 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.379257 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.391147 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.403873 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.409638 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.409694 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.409708 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.409726 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.409739 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.418962 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.432602 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.513054 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.513680 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.513724 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.513748 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.513763 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.617048 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.617312 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.617404 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.617487 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.617665 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.720710 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.720961 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.721064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.721164 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.721273 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.824238 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.824516 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.824605 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.824719 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.824787 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.847075 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.847247 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.847326 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:05.847306151 +0000 UTC m=+38.322202239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.926797 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.926877 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.926894 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.926921 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.926935 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.963109 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.963155 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.963166 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.963183 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.963194 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.976030 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.980566 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.980615 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.980630 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.980652 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.980662 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:04 crc kubenswrapper[4679]: E0203 12:06:04.993481 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:04Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.997725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.997791 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.997803 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.997825 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:04 crc kubenswrapper[4679]: I0203 12:06:04.997837 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:04Z","lastTransitionTime":"2026-02-03T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.012030 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.015626 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.015672 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.015684 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.015704 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.015715 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.028831 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.032204 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.032241 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.032249 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.032264 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.032275 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.044782 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:05Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.044903 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.047002 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
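
The node-status updates above fail repeatedly for one root cause until the kubelet gives up with "update node status exceeds retry count": each PATCH is intercepted by the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-03. A minimal sketch of how the certificate's validity window could be confirmed from the node (assuming Python 3 and an openssl binary are available on the host; the endpoint, port, and expiry date are taken from the error text above):

    import ssl, socket, subprocess

    HOST, PORT = "127.0.0.1", 9743   # webhook endpoint quoted in the kubelet error

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # inspection only; normal verification would
    ctx.verify_mode = ssl.CERT_NONE  # fail here exactly as it does for the kubelet

    # Complete a handshake without verification and fetch the peer certificate.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            pem = ssl.DER_cert_to_PEM_cert(tls.getpeercert(binary_form=True))

    # Print notBefore/notAfter for comparison with the node clock; per the log,
    # notAfter should read Aug 24 17:21:41 2025 GMT, i.e. well in the past.
    out = subprocess.run(["openssl", "x509", "-noout", "-dates"],
                         input=pem.encode(), capture_output=True, check=True)
    print(out.stdout.decode())

Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.047002 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc"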
event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.047059 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.047072 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.047091 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.047105 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.149885 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.149938 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.149947 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.149966 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.149980 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.183073 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:58:51.263090142 +0000 UTC Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.211510 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.211598 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.211635 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.211688 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.211770 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.211857 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.254847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.254910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.254925 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.254949 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.254967 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.357428 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.357491 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.357501 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.357519 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.357529 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.460612 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.460670 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.460683 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.460705 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.460720 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.563475 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.563518 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.563552 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.563571 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.563581 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.666029 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.666142 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.666228 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.666295 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.666313 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.769511 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.769566 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.769577 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.769598 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.769609 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.861598 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.861780 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:05 crc kubenswrapper[4679]: E0203 12:06:05.861856 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:07.861830699 +0000 UTC m=+40.336726787 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.872293 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.872333 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.872342 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.872372 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.872388 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.975043 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.975101 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.975113 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.975137 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:05 crc kubenswrapper[4679]: I0203 12:06:05.975152 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:05Z","lastTransitionTime":"2026-02-03T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.077936 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.078242 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.078254 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.078274 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.078990 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.181254 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.181309 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.181319 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.181339 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.181351 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.183414 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:46:46.918946286 +0000 UTC Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.210868 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:06 crc kubenswrapper[4679]: E0203 12:06:06.211055 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.284766 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.284802 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.284813 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.284832 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.284843 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.387550 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.387614 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.387636 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.387662 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.387680 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.490466 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.490532 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.490544 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.490567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.490580 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.593172 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.593231 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.593244 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.593264 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.593277 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.696571 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.696621 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.696631 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.696652 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.696663 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.799759 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.799809 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.799820 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.799839 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.799854 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.902415 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.902471 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.902490 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.902512 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:06 crc kubenswrapper[4679]: I0203 12:06:06.902524 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:06Z","lastTransitionTime":"2026-02-03T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.005259 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.005314 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.005325 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.005380 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.005392 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.108112 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.108159 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.108171 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.108188 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.108199 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.184456 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:54:22.11494807 +0000 UTC Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210398 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210442 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210460 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210470 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210704 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210744 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:07 crc kubenswrapper[4679]: E0203 12:06:07.210802 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.210836 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:07 crc kubenswrapper[4679]: E0203 12:06:07.210958 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:07 crc kubenswrapper[4679]: E0203 12:06:07.211020 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.313184 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.313226 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.313239 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.313256 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.313266 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.415809 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.415861 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.415875 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.415896 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.415910 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.518379 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.518425 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.518435 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.518454 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.518466 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.620837 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.620897 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.620910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.620927 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.620938 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.723226 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.723282 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.723308 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.723332 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.723347 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.827229 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.827277 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.827290 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.827313 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.827327 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.886134 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:07 crc kubenswrapper[4679]: E0203 12:06:07.886322 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:07 crc kubenswrapper[4679]: E0203 12:06:07.886418 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:11.886398627 +0000 UTC m=+44.361294715 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.929646 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.929720 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.929735 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.929755 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:07 crc kubenswrapper[4679]: I0203 12:06:07.929767 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:07Z","lastTransitionTime":"2026-02-03T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.032843 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.033006 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.033333 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.033486 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.033524 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.136498 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.136542 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.136557 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.136578 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.136589 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.185607 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:35:37.12137384 +0000 UTC Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.211258 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:08 crc kubenswrapper[4679]: E0203 12:06:08.211459 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.225082 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.239712 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.239750 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.239759 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.239778 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: 
I0203 12:06:08.239790 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.241291 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.257153 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.270821 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.281406 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.293992 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.310584 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.326260 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.340058 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.342424 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.342465 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.342479 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.342502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.342520 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.351542 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.362728 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.376413 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.391207 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.406480 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.427998 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.444792 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.444844 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc 
kubenswrapper[4679]: I0203 12:06:08.444878 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.444895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.444905 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.450098 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f
894e3986bd90852eb1b5d659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:08Z is after 2025-08-24T17:21:41Z"
Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.547936 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.547980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.547993 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.548010 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.548023 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
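
The record above is the key failure in this capture: the kubelet's status patch for ovnkube-node-b7ws5 is rejected because the network-node-identity webhook's TLS certificate expired on 2025-08-24, months before the node's clock time of 2026-02-03. A minimal Go sketch of the same validity check, assuming a PEM-encoded serving certificate at a hypothetical path (the log does not show where this webhook's certificate actually lives on disk):

// check_cert.go - minimal sketch; the path below is an assumption, not from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical location of the webhook serving certificate.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
	// The same comparison the TLS handshake makes; the log shows current time
	// 2026-02-03T12:06:08Z against a NotAfter of 2025-08-24T17:21:41Z.
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate has expired")
	}
}
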
Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.651416 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.651472 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.651481 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.651502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.651512 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.754395 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.754454 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.754465 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.754483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.754496 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.858274 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.858340 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.858396 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.858425 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.858441 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.961189 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.961247 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.961258 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.961277 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:08 crc kubenswrapper[4679]: I0203 12:06:08.961582 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:08Z","lastTransitionTime":"2026-02-03T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.065124 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.065188 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.065197 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.065217 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.065228 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.167846 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.167908 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.167920 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.167943 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.167963 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.186425 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:34:46.526997826 +0000 UTC Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.211102 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.211161 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.211247 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:09 crc kubenswrapper[4679]: E0203 12:06:09.211309 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:09 crc kubenswrapper[4679]: E0203 12:06:09.211467 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:09 crc kubenswrapper[4679]: E0203 12:06:09.211618 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.271666 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.271710 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.271725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.271745 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.271756 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.375183 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.375254 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.375282 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.375305 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.375345 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.477616 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.477655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.477665 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.477681 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.477692 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.580549 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.580604 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.580612 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.580629 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.580678 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.683782 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.683837 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.683847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.683863 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.683879 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.791901 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.791952 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.791962 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.791981 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.791992 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.895916 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.895982 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.895994 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.896018 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.896031 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.999025 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.999101 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.999125 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.999160 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:09 crc kubenswrapper[4679]: I0203 12:06:09.999186 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:09Z","lastTransitionTime":"2026-02-03T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.102553 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.102644 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.102666 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.102703 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.102735 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.186599 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:08:33.305489375 +0000 UTC Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.206030 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.206116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.206142 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.206188 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.206202 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.211566 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:10 crc kubenswrapper[4679]: E0203 12:06:10.211758 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.309055 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.309104 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.309116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.309133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.309143 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.412041 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.412084 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.412093 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.412108 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.412138 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.515949 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.516012 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.516029 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.516051 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.516067 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.618822 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.618866 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.618878 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.618896 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.618909 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.722481 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.722555 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.722573 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.722597 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.722617 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.825048 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.825100 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.825111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.825130 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.825142 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.928463 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.928546 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.928566 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.928593 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:10 crc kubenswrapper[4679]: I0203 12:06:10.928612 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:10Z","lastTransitionTime":"2026-02-03T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.030896 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.030947 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.030959 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.030977 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.030990 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.134601 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.134664 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.134676 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.134697 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.134711 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.187748 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:25:14.864224121 +0000 UTC Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.211190 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.211203 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.211222 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:11 crc kubenswrapper[4679]: E0203 12:06:11.211649 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:11 crc kubenswrapper[4679]: E0203 12:06:11.211643 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:11 crc kubenswrapper[4679]: E0203 12:06:11.211411 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.237850 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.237894 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.237903 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.237919 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.237929 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.341329 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.341397 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.341411 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.341429 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.341449 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.444572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.444643 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.444655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.444676 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.444689 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.546921 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.546981 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.546996 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.547015 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.547027 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.649783 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.649838 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.649847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.649867 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.649877 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.752835 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.752884 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.752895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.752914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.752928 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.856215 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.856278 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.856287 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.856303 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.856313 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.932106 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:11 crc kubenswrapper[4679]: E0203 12:06:11.932301 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:11 crc kubenswrapper[4679]: E0203 12:06:11.932462 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:19.932441473 +0000 UTC m=+52.407337561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.959200 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.959244 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.959253 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.959273 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:11 crc kubenswrapper[4679]: I0203 12:06:11.959286 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:11Z","lastTransitionTime":"2026-02-03T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.062531 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.062590 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.062602 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.062623 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.062636 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.165077 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.165132 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.165141 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.165160 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.165184 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.188521 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:26:47.116606848 +0000 UTC Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.211002 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:12 crc kubenswrapper[4679]: E0203 12:06:12.211172 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.268573 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.268649 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.268665 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.268691 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.268703 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.371497 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.371535 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.371551 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.371572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.371585 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.474047 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.474086 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.474102 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.474119 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.474132 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.576347 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.576388 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.576396 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.576411 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.576420 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.679328 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.679405 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.679419 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.679437 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.679447 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.781793 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.781845 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.781855 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.781870 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.781879 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.884276 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.884330 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.884342 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.884377 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.884392 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.987741 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.987790 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.987799 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.987818 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:12 crc kubenswrapper[4679]: I0203 12:06:12.987836 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:12Z","lastTransitionTime":"2026-02-03T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.090525 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.090802 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.090923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.091044 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.091124 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.189428 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:39:51.829724008 +0000 UTC Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.193922 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.194169 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.194200 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.194220 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.194233 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.211311 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.211433 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.211498 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:13 crc kubenswrapper[4679]: E0203 12:06:13.211539 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:13 crc kubenswrapper[4679]: E0203 12:06:13.211704 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:13 crc kubenswrapper[4679]: E0203 12:06:13.211894 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.297442 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.297492 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.297502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.297524 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.297537 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.400795 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.400856 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.400867 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.400893 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.400905 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.503798 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.503850 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.503861 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.503880 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.503891 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.606053 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.606096 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.606105 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.606128 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.606148 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.709608 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.709672 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.709687 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.709711 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.709731 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.812583 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.812638 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.812648 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.812667 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.812681 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.915061 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.915395 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.915563 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.915651 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:13 crc kubenswrapper[4679]: I0203 12:06:13.915718 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:13Z","lastTransitionTime":"2026-02-03T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.017909 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.017948 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.017957 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.017973 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.017983 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.120853 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.120898 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.120907 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.120927 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.120937 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.190398 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:20:50.187405984 +0000 UTC Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.211831 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:14 crc kubenswrapper[4679]: E0203 12:06:14.212504 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.212721 4679 scope.go:117] "RemoveContainer" containerID="5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.224017 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.224064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.224075 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.224093 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.224104 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.326661 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.326714 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.326726 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.326748 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.326759 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.429294 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.429338 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.429350 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.429389 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.429403 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.531977 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.532036 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.532048 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.532067 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.532088 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.550338 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/1.log" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.553311 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.553853 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.570465 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.583726 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.600324 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.635020 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.635907 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.635966 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.635979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.636003 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.636018 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.650634 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.666339 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.687624 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.706604 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.723958 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.739296 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.739352 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.739371 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.739408 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.739425 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.743940 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.812424 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7
488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 
12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.832212 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.841853 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.841885 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.841896 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.841914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.841927 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.853019 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.868843 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.881191 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.894926 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:14Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.944489 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.944719 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.944846 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.944949 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:14 crc kubenswrapper[4679]: I0203 12:06:14.945035 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:14Z","lastTransitionTime":"2026-02-03T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.048256 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.048311 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.048324 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.048345 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.048359 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.098335 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.098413 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.098429 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.098452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.098465 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.111331 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.115384 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.115442 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.115452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.115469 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.115480 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.127911 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.131523 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.131561 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
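
Every retry above fails for the same reason: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. A minimal Go sketch of how one might confirm the certificate's validity window from the node; this is assumed diagnostic code, not part of the log or of any OpenShift tooling:

// Assumed diagnostic sketch: read the webhook's serving certificate and
// print its validity window, mirroring the x509 error in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify is deliberate: the handshake would otherwise fail
	// on the expired certificate before we could inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		// For the log above this should print notAfter=2025-08-24 17:21:41 +0000 UTC.
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.CommonName, cert.NotBefore, cert.NotAfter)
	}
}
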
event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.131580 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.131603 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.131616 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.142994 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.148642 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.148684 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
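
Separately from the webhook failure, the Ready condition stays False because kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/; on this cluster that config would normally be written by ovn-kubernetes once the ovnkube-node pod runs. A minimal Go sketch (assumed helper, not from the log) to check whether the directory has been populated:

// Assumed helper sketch: report whether kubelet's CNI config directory is
// still empty, which is exactly the NetworkPluginNotReady condition above.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	if len(entries) == 0 {
		fmt.Println("no CNI configuration files: network plugin not ready")
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // e.g. an ovn-kubernetes conflist once networking is up
	}
}
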
event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.148696 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.148717 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.148733 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.161895 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.165744 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.165796 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
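
Each retry serializes the full strategic-merge patch kubelet is sending, which the escaping makes hard to read. A minimal Go sketch that decodes a trimmed stand-in for that payload and prints its conditions; the literal below is abbreviated to the Ready condition only, with field names taken from the log:

// Assumed sketch: decode a trimmed stand-in for the node status patch from
// the retries above and print each condition in readable form.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type statusPatch struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Abbreviated payload: only the Ready condition from the log is kept.
	raw := `{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready"}]}}`
	var p statusPatch
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	for _, c := range p.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
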
event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.165817 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.165841 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.165854 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.179361 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.179569 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.181582 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.181615 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.181624 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.181642 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.181653 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.191068 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:46:06.501084856 +0000 UTC Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.211741 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.211797 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.211925 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.212124 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.212236 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.212317 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.284641 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.284694 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.284707 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.284725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.284738 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.387461 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.387510 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.387520 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.387538 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.387548 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.490321 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.490392 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.490410 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.490433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.490446 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.559542 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/2.log" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.560363 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/1.log" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.564168 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8" exitCode=1 Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.564207 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.564283 4679 scope.go:117] "RemoveContainer" containerID="5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.564952 4679 scope.go:117] "RemoveContainer" containerID="a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8" Feb 03 12:06:15 crc kubenswrapper[4679]: E0203 12:06:15.565139 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.583250 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.592871 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.592940 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.592957 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.592979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.593182 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.602084 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.617095 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.631763 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.644020 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.658433 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.672069 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.687061 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.695512 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.695548 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.695559 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.695575 4679 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.695585 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.701173 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.712871 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.725295 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.737304 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.752360 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.772299 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f13e24e48963110fb1670ee55fd93f06b1fc79f894e3986bd90852eb1b5d659\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:01Z\\\",\\\"message\\\":\\\"-24T17:21:41Z]\\\\nI0203 12:06:01.274296 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0203 12:06:01.274304 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274303 6085 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dz6f8 after 0 failed attempt(s)\\\\nI0203 12:06:01.274315 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-8qvcg\\\\nI0203 12:06:01.274314 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274322 6085 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dz6f8\\\\nI0203 12:06:01.274326 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:01.274288 6085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:01.274333 6085 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0203 12:06:01.274374 6085 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0203 12:06:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry 
successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.784518 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.798049 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.798100 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.798111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.798135 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.798153 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.800085 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.900923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.900982 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.901001 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 
12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.901024 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:15 crc kubenswrapper[4679]: I0203 12:06:15.901037 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:15Z","lastTransitionTime":"2026-02-03T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.004480 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.004540 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.004553 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.004580 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.004595 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.108288 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.108339 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.108348 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.108364 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.108377 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.191546 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:28:34.825502141 +0000 UTC Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.210838 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:16 crc kubenswrapper[4679]: E0203 12:06:16.211018 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.211522 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.211548 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.211558 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.211574 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.211586 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.314672 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.314730 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.314746 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.314769 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.314787 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.418150 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.418200 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.418211 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.418229 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.418241 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.521533 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.521583 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.521596 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.521618 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.521632 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.570604 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/2.log" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.575417 4679 scope.go:117] "RemoveContainer" containerID="a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8" Feb 03 12:06:16 crc kubenswrapper[4679]: E0203 12:06:16.575598 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.590504 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.604492 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.616919 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.623622 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.623663 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.623675 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.623694 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.623706 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.635310 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.654155 4679 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.665119 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.678974 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.692246 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.704890 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.715694 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.726078 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.726116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.726127 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.726147 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.726161 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.727482 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.740476 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.758084 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.772430 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.786191 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.798669 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:16Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.828763 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.828812 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.828826 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.828848 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.828861 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.932842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.932906 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.932919 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.932942 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.932957 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:16Z","lastTransitionTime":"2026-02-03T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:16 crc kubenswrapper[4679]: I0203 12:06:16.994473 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.003775 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.010506 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.026458 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.035363 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.035433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.035446 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.035469 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.035485 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.043979 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.059128 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.078618 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.096970 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.108271 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.123252 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.134754 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.138901 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.138945 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.138954 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.138972 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.138984 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.147889 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.158039 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.169864 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.182730 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a143
6050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.191906 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:26:48.325398723 +0000 UTC Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.196533 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.208590 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.210986 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.211087 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.210986 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:17 crc kubenswrapper[4679]: E0203 12:06:17.211102 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:17 crc kubenswrapper[4679]: E0203 12:06:17.211353 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:17 crc kubenswrapper[4679]: E0203 12:06:17.211511 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.220768 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:17Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.242161 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.242216 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.242226 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.242248 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.242260 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.345501 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.345559 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.345570 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.345589 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.345601 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.448547 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.448600 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.448612 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.448634 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.448647 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.551266 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.551312 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.551321 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.551339 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.551350 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.654031 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.654077 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.654088 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.654109 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.654127 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.756567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.756621 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.756637 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.756656 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.756672 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.859606 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.859667 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.859680 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.859702 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.859716 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.962939 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.963023 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.963050 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.963076 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:17 crc kubenswrapper[4679]: I0203 12:06:17.963094 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:17Z","lastTransitionTime":"2026-02-03T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.066620 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.066665 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.066677 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.066695 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.066709 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.169046 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.169112 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.169124 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.169141 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.169152 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.192806 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:31:13.301789205 +0000 UTC Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.211908 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:18 crc kubenswrapper[4679]: E0203 12:06:18.212133 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.226293 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.240998 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.253622 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.269341 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.271706 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.271751 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.271767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.271790 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.271806 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.283069 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.299653 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.319428 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.331602 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.347222 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.361837 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.375157 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.375197 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.375209 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.375228 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.375242 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.378460 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.391479 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.406781 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.420188 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.436908 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.451103 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.463786 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:18Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.477823 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.477876 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.477889 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.477906 4679 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.477917 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} [Feb 03 12:06:18.580460-12:06:18.683651: two identical "Recording event message for node" (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady) and "Node became not ready" status cycles elided] Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.683665 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
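Both failed status patches above trace back to the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-03, so every status update the kubelet sends is rejected at the TLS layer. A quick way to reproduce the kubelet's x509 check from the node is to pull the certificate off the listener and compare its validity window with the current time. This is an illustrative sketch, not part of any cluster tooling; it assumes Python with the third-party cryptography package (version 42+ for the *_utc accessors), and the host/port are taken from the webhook URL in the log.

```python
# Sketch: fetch a TLS server certificate and compare its validity window
# against the current time, mirroring the x509 error logged above.
# Assumes the `cryptography` package (>= 42) is installed.
import ssl
from datetime import datetime, timezone

from cryptography import x509

def check_cert(host: str, port: int) -> None:
    # get_server_certificate does not validate the chain, so it still
    # works against an expired or self-signed certificate.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    now = datetime.now(timezone.utc)
    print(f"notBefore={cert.not_valid_before_utc}  "
          f"notAfter={cert.not_valid_after_utc}  now={now}")
    if now > cert.not_valid_after_utc:
        print("certificate has expired")        # the case logged above
    elif now < cert.not_valid_before_utc:
        print("certificate is not yet valid")
    else:
        print("certificate is within its validity window")

if __name__ == "__main__":
    check_cert("127.0.0.1", 9743)               # endpoint from the log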
[Feb 03 12:06:18.786825-12:06:18.994222: two further identical status cycles elided] Feb 03 12:06:18 crc kubenswrapper[4679]: I0203 12:06:18.994237 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:18Z","lastTransitionTime":"2026-02-03T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
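Every one of these Ready=False heartbeats carries the same KubeletNotReady payload: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, which is also why the sandbox creations further down keep being skipped. The runtime's readiness check amounts to finding a parseable network config file in that directory. Below is a rough, stdlib-only equivalent for inspection purposes; the directory comes from the log message, and the .conf/.conflist/.json extension list is the usual CNI convention rather than anything quoted from cri-o.

```python
# Sketch: approximate the check behind "no CNI configuration file in
# /etc/kubernetes/cni/net.d/" by listing the config files a container
# runtime would typically consider.
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")   # path from the log

def cni_configs(conf_dir: Path = CNI_CONF_DIR) -> list[Path]:
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs()
    if not found:
        # This is the state the kubelet is reporting: NetworkReady=false
        # until the network provider writes its config here.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/")
    for path in found:
        with path.open() as fh:
            conf = json.load(fh)                 # CNI configs are JSON
        print(path.name, "->", conf.get("name", "<unnamed>"))
```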
[Feb 03 12:06:19.097062-12:06:19.097189: one identical status cycle elided] Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.193041 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:47:25.251938071 +0000 UTC [Feb 03 12:06:19.200294-12:06:19.200419: one identical status cycle elided] Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.211654 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.211654 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:19 crc kubenswrapper[4679]: E0203 12:06:19.211805 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.211665 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:19 crc kubenswrapper[4679]: E0203 12:06:19.211926 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:19 crc kubenswrapper[4679]: E0203 12:06:19.211941 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.302799 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.302843 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.302852 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.302869 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.302891 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [Feb 03 12:06:19.405940-12:06:19.919924: five identical status cycles and the event lines of a sixth elided] Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.919934 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:19Z","lastTransitionTime":"2026-02-03T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:19 crc kubenswrapper[4679]: I0203 12:06:19.962051 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:19 crc kubenswrapper[4679]: E0203 12:06:19.962232 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:19 crc kubenswrapper[4679]: E0203 12:06:19.962304 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:35.962285054 +0000 UTC m=+68.437181142 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.023261 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.023311 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.023324 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.023348 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.023386 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.063395 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.063528 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063625 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:06:52.063591327 +0000 UTC m=+84.538487415 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063704 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063728 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063743 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.063813 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.063878 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063891 4679 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:52.063881885 +0000 UTC m=+84.538777973 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063939 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063961 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063975 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.063989 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.064029 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.064035 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:52.064017259 +0000 UTC m=+84.538913337 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.063967 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.064063 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-03 12:06:52.06404458 +0000 UTC m=+84.538940668 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.064097 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:06:52.064073861 +0000 UTC m=+84.538969949 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.126538 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.126598 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.126635 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.126658 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.126669 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.194037 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:20:06.997642642 +0000 UTC Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.211611 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:20 crc kubenswrapper[4679]: E0203 12:06:20.211801 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.229683 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.229729 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.229738 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.229756 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.229766 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.332177 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.332232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.332245 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.332266 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.332590 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [Feb 03 12:06:20.436245-12:06:20.952677: five identical status cycles and the event lines of a sixth elided] Feb 03 12:06:20 crc kubenswrapper[4679]: I0203 12:06:20.952692 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:20Z","lastTransitionTime":"2026-02-03T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
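The volume manager entries a little further up show the kubelet's other retry mechanism at work: MountVolume.SetUp for metrics-certs was deferred with durationBeforeRetry 16s at 12:06:19.962, and the next batch of volume operations at 12:06:20.063 with 32s, i.e. the per-operation delay doubles on each consecutive failure up to a cap. The loop below sketches that exponential-backoff shape only; the initial delay, factor, and cap are illustrative placeholders, not the exact constants kubelet's nestedpendingoperations uses.

```python
# Sketch: doubling retry delays like the durationBeforeRetry values
# logged above (… 16s, 32s …). Constants are illustrative.
from datetime import timedelta

def backoff_delays(initial: float = 0.5, factor: float = 2.0,
                   cap: float = 122.0, attempts: int = 10):
    delay = initial
    for _ in range(attempts):
        yield timedelta(seconds=min(delay, cap))  # clamp at the cap
        delay *= factor

if __name__ == "__main__":
    for i, d in enumerate(backoff_delays(), start=1):
        print(f"attempt {i}: retry after {d}")
    # With these placeholder constants the sequence passes through
    # 16s and then 32s, matching the two retry windows in the log.
```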
[Feb 03 12:06:21.055335-12:06:21.158531: two identical status cycles elided] Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.194525 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:16:09.930410511 +0000 UTC Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.210956 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.211075 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.211122 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:21 crc kubenswrapper[4679]: E0203 12:06:21.211284 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:21 crc kubenswrapper[4679]: E0203 12:06:21.211419 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:21 crc kubenswrapper[4679]: E0203 12:06:21.211544 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.261460 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.261523 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.261533 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.261556 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:21 crc kubenswrapper[4679]: I0203 12:06:21.261569 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:21Z","lastTransitionTime":"2026-02-03T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [Feb 03 12:06:21.364052-12:06:22.189265: eight identical status cycles and the event lines of a ninth elided] Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.189277 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
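The recurring certificate_manager.go:356 lines (rotation deadlines of 2025-11-30, 2025-12-11, 2025-11-13, all well before the 2026-02-24 expiry, and different on every occurrence) are the kubelet-serving certificate manager picking a fresh jittered rotation deadline each time it evaluates the certificate. The sketch below models the client-go style scheme of choosing a random point in roughly the last quarter of the certificate's lifetime; the 70% + 0-20% split and the assumed one-year issue date are assumptions for illustration, not values read from the cluster.

```python
# Sketch: why each certificate_manager line above prints a different
# rotation deadline. The 0.7 + 0.2*rand split is an assumption modelled
# on client-go's jittered-deadline behaviour, not a quote of its code.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    lifetime = (not_after - not_before).total_seconds()
    jittered = lifetime * (0.7 + 0.2 * random.random())
    return not_before + timedelta(seconds=jittered)

if __name__ == "__main__":
    nb = datetime(2025, 2, 24, 5, 53, 3)   # assumed issue time (expiry - 1y)
    na = datetime(2026, 2, 24, 5, 53, 3)   # expiry printed in the log
    for _ in range(3):
        # Each evaluation re-randomizes, like the differing log lines.
        print("rotation deadline:", rotation_deadline(nb, na))
```

Under these assumptions the deadlines land between early November 2025 and mid January 2026, which brackets the values actually logged.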
Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.195226 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:38:57.581694866 +0000 UTC Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.211904 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:22 crc kubenswrapper[4679]: E0203 12:06:22.212285 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.292509 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.292580 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.292603 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.292634 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.292659 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.395774 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.395831 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.395855 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.395885 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.395908 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.498458 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.498526 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.498539 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.498562 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.498583 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.600555 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.600608 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.600635 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.600683 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.600698 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.717434 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.717486 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.717495 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.717513 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.717523 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.821063 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.821121 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.821138 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.821163 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.821178 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.924345 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.924466 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.924485 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.924513 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:22 crc kubenswrapper[4679]: I0203 12:06:22.924530 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:22Z","lastTransitionTime":"2026-02-03T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.027683 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.027733 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.027747 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.027767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.027778 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.130230 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.130339 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.130382 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.130410 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.130426 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.195740 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:05:52.032377849 +0000 UTC Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.211202 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.211254 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:23 crc kubenswrapper[4679]: E0203 12:06:23.211347 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.211254 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:23 crc kubenswrapper[4679]: E0203 12:06:23.211453 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:23 crc kubenswrapper[4679]: E0203 12:06:23.211560 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.233330 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.233394 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.233407 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.233427 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.233444 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.335665 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.335722 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.335734 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.335755 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.335767 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.438450 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.438493 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.438502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.438517 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.438526 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.541295 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.541343 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.541352 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.541392 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.541408 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.643359 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.643412 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.643420 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.643437 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.643447 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.746557 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.746610 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.746623 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.746642 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.746656 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.849289 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.849356 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.849386 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.849407 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.849421 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.952590 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.952659 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.952670 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.952689 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:23 crc kubenswrapper[4679]: I0203 12:06:23.952704 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:23Z","lastTransitionTime":"2026-02-03T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.056332 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.056608 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.056622 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.056644 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.056660 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.159772 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.159815 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.159824 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.159841 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.159851 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.196140 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:07:41.222319821 +0000 UTC Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.210774 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:24 crc kubenswrapper[4679]: E0203 12:06:24.210927 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.262480 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.262541 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.262556 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.262584 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.262603 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.366035 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.366102 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.366116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.366133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.366146 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.468723 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.468767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.468779 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.468796 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.468808 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.572470 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.572522 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.572536 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.572564 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.572581 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.676421 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.676484 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.676496 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.676517 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.676531 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.780281 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.780415 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.780452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.780484 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.780507 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.884162 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.884224 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.884237 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.884263 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.884277 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.987727 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.987804 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.987820 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.987844 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:24 crc kubenswrapper[4679]: I0203 12:06:24.987864 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:24Z","lastTransitionTime":"2026-02-03T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.092174 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.092223 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.092234 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.092250 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.092263 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.194907 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.194957 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.194970 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.194989 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.195004 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.197291 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:17:58.054115972 +0000 UTC Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.211137 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.211218 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.211217 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.211340 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.211526 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.211786 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.297805 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.297856 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.297866 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.297888 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.297903 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.369056 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.369087 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.369096 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.369112 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.369121 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.388049 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:25Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.392615 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.392645 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.392655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.392674 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.392687 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.405035 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:25Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.408909 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.408943 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.408954 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.408973 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.408987 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.420093 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:25Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.423114 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.423152 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
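Every retry in this loop fails for the same reason, spelled out at the end of each attempt: the serving certificate behind the node.network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. Below is a minimal Go sketch of how one might confirm the expiry directly from the node, independent of the kubelet. It is illustrative only: the file name certcheck.go is hypothetical, and it assumes the webhook endpoint is reachable and that the goal is to inspect, not trust, the presented certificate.

// certcheck.go (hypothetical helper): dial a TLS endpoint without
// verifying the chain and print each presented certificate's
// validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// The address comes from the failing webhook POST in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the cert, do not trust it
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

Run against the endpoint from this log, the sketch would be expected to print notAfter=2025-08-24T17:21:41Z and expired=true, matching the x509 error text in the patch failures.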
event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.423164 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.423183 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.423195 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.434345 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:25Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.437721 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.437757 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
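Every heartbeat in this stretch also carries the same Ready=False condition: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The Go sketch below performs the equivalent directory check. It is illustrative, not the kubelet's actual code: the file name cnicheck.go is hypothetical, the path is taken verbatim from the log message, and the accepted extensions mirror common CNI loader behavior and are an assumption.

// cnicheck.go (hypothetical helper): report whether any CNI network
// config exists in the directory the kubelet complains about.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken verbatim from the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders commonly accept
			fmt.Println("config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; NetworkReady will stay false")
	}
}

On this node the sketch would presumably report no configuration files, consistent with the NetworkPluginNotReady message repeated in every heartbeat below.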
event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.437769 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.437789 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.437799 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.448562 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:25Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:25 crc kubenswrapper[4679]: E0203 12:06:25.448692 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.450463 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.450507 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.450521 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.450541 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.450555 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.553527 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.553572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.553580 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.553600 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.553611 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.656295 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.656415 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.656433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.656454 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.656466 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.759483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.759617 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.759642 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.759660 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.759672 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.861749 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.861813 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.861825 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.861853 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.861865 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.965238 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.965309 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.965323 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.965345 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:25 crc kubenswrapper[4679]: I0203 12:06:25.965377 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:25Z","lastTransitionTime":"2026-02-03T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.068969 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.069067 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.069094 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.069173 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.069196 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.172830 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.172889 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.172903 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.172922 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.172933 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.198430 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:34:57.337426807 +0000 UTC Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.211418 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:26 crc kubenswrapper[4679]: E0203 12:06:26.211629 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.275959 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.276010 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.276022 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.276043 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.276055 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.379305 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.379397 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.379406 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.379425 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.379436 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.481825 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.481884 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.481895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.481918 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.481930 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.584872 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.584915 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.584926 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.584944 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.584955 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.687500 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.687610 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.687626 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.687653 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.687668 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.790626 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.790661 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.790671 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.790691 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.790701 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.893372 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.893418 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.893429 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.893449 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.893462 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.996006 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.996096 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.996108 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.996127 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:26 crc kubenswrapper[4679]: I0203 12:06:26.996138 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:26Z","lastTransitionTime":"2026-02-03T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.098903 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.098954 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.098967 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.098991 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.099006 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.199008 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:20:14.062830192 +0000 UTC Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.202143 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.202179 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.202189 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.202208 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.202221 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.211337 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.211448 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:27 crc kubenswrapper[4679]: E0203 12:06:27.211544 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.211583 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:27 crc kubenswrapper[4679]: E0203 12:06:27.211619 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:27 crc kubenswrapper[4679]: E0203 12:06:27.211652 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.305171 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.305219 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.305233 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.305255 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.305268 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.408140 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.408194 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.408205 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.408226 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.408238 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.511037 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.511096 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.511113 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.511134 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.511149 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.613123 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.613170 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.613182 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.613199 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.613213 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.716572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.716620 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.716637 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.716653 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.716667 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.819984 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.820033 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.820044 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.820061 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.820072 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.923195 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.923256 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.923267 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.923286 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:27 crc kubenswrapper[4679]: I0203 12:06:27.923298 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:27Z","lastTransitionTime":"2026-02-03T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.026178 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.026232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.026246 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.026264 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.026283 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.129401 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.129473 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.129491 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.129520 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.129538 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.199773 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:08:07.956830683 +0000 UTC Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.211943 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:28 crc kubenswrapper[4679]: E0203 12:06:28.212117 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.226436 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.232267 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.232298 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.232307 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.232322 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.232333 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.239520 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.251955 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.265690 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.280417 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.296984 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.315580 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.335232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.335294 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.335309 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.335331 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.335346 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.339497 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7
488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.353807 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.368295 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.384867 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.402095 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.415751 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.432034 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.437822 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.437893 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.437909 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.437930 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.437942 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.444714 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.457633 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.470909 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:28Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.540230 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.540277 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.540289 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.540311 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.540324 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.643605 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.643655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.643665 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.643684 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.643696 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.746725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.746791 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.746804 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.746910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.746928 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.850011 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.850057 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.850067 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.850081 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.850092 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.953143 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.953209 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.953219 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.953236 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:28 crc kubenswrapper[4679]: I0203 12:06:28.953249 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:28Z","lastTransitionTime":"2026-02-03T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.055901 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.055991 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.056019 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.056056 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.056082 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.159825 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.159892 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.159910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.159935 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.159953 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.200718 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:23:36.772005944 +0000 UTC Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.211220 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.211306 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:29 crc kubenswrapper[4679]: E0203 12:06:29.211468 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.211488 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:29 crc kubenswrapper[4679]: E0203 12:06:29.211645 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:29 crc kubenswrapper[4679]: E0203 12:06:29.211842 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.262907 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.262960 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.262971 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.262992 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.263005 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.366094 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.366131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.366140 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.366155 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.366164 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.469337 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.469421 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.469434 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.469455 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.469472 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.572113 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.572172 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.572183 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.572200 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.572211 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.675306 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.675347 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.675371 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.675391 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.675404 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.777985 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.778023 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.778031 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.778047 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.778059 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.881026 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.881106 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.881116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.881134 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.881147 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.984473 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.984541 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.984569 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.984600 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:29 crc kubenswrapper[4679]: I0203 12:06:29.984616 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:29Z","lastTransitionTime":"2026-02-03T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.087783 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.087819 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.087830 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.087847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.087858 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.190924 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.190967 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.190980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.191002 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.191016 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.201441 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:12:30.493872092 +0000 UTC Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.210862 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:30 crc kubenswrapper[4679]: E0203 12:06:30.211049 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.294157 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.294216 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.294229 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.294249 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.294261 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.397580 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.397625 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.397656 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.397674 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.397685 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.499968 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.500013 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.500028 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.500047 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.500058 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.602543 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.602583 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.602594 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.602611 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.602623 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.705671 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.705703 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.705721 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.705743 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.705753 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.809067 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.809128 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.809142 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.809545 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.809568 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.912510 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.912544 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.912556 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.912574 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:30 crc kubenswrapper[4679]: I0203 12:06:30.912586 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:30Z","lastTransitionTime":"2026-02-03T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.015440 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.015512 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.015529 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.015553 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.015565 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.118746 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.118824 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.118837 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.118858 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.118873 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.202304 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:37:38.325219833 +0000 UTC Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.211699 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.211973 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.212002 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:31 crc kubenswrapper[4679]: E0203 12:06:31.212056 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:31 crc kubenswrapper[4679]: E0203 12:06:31.211924 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:31 crc kubenswrapper[4679]: E0203 12:06:31.212194 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.213079 4679 scope.go:117] "RemoveContainer" containerID="a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8" Feb 03 12:06:31 crc kubenswrapper[4679]: E0203 12:06:31.213477 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.221976 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.222010 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.222020 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.222034 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.222042 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.324702 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.324752 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.324768 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.324791 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.324804 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.429492 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.429573 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.429587 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.429609 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.429626 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.532089 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.532123 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.532131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.532144 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.532153 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.634895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.634931 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.634944 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.634963 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.634976 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.738139 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.738191 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.738204 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.738222 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.738236 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.840749 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.840834 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.840847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.840863 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.840875 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.943547 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.943597 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.943608 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.943624 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:31 crc kubenswrapper[4679]: I0203 12:06:31.943639 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:31Z","lastTransitionTime":"2026-02-03T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.047270 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.047337 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.047350 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.047418 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.047439 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.150585 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.150638 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.150652 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.150673 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.150690 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.203434 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:46:22.858951266 +0000 UTC Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.211078 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:32 crc kubenswrapper[4679]: E0203 12:06:32.211313 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.253550 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.253596 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.253607 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.253623 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.253659 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.356272 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.356330 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.356342 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.356404 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.356420 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.459792 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.459860 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.459881 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.459907 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.459924 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.563268 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.563308 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.563317 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.563336 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.563347 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.666253 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.666303 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.666315 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.666332 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.666344 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.769062 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.769116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.769133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.769150 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.769163 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.871942 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.871992 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.872004 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.872019 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.872029 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.974559 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.974605 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.974614 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.974629 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:32 crc kubenswrapper[4679]: I0203 12:06:32.974640 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:32Z","lastTransitionTime":"2026-02-03T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.077052 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.077122 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.077134 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.077155 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.077168 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.179988 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.180302 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.180430 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.180566 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.180674 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.203622 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:39:01.589259532 +0000 UTC Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.211056 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.211119 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.211205 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:33 crc kubenswrapper[4679]: E0203 12:06:33.211570 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:33 crc kubenswrapper[4679]: E0203 12:06:33.211666 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:33 crc kubenswrapper[4679]: E0203 12:06:33.211789 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.225763 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.283624 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.283666 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.283677 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.283696 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.283708 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.386267 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.386314 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.386326 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.386345 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.386380 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.488719 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.488967 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.488979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.488998 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.489015 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.592059 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.592104 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.592112 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.592129 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.592140 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.694644 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.694692 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.694704 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.694722 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.694731 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.797730 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.797806 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.797815 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.797833 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.797843 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.901269 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.901323 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.901332 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.901352 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:33 crc kubenswrapper[4679]: I0203 12:06:33.901624 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:33Z","lastTransitionTime":"2026-02-03T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.004232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.004290 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.004301 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.004322 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.004333 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.107240 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.107659 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.107760 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.107854 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.107959 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.204404 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:00:43.387229958 +0000 UTC Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.210692 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.210705 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.210833 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: E0203 12:06:34.210831 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.210845 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.210879 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.210892 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.313457 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.313692 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.313837 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.313951 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.314078 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.417726 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.417811 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.417826 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.417847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.417863 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.520727 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.521011 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.521096 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.521174 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.521244 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.624391 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.624469 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.624480 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.624501 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.624511 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.727422 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.728012 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.728120 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.728229 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.728334 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.831506 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.831562 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.831572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.831587 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.831597 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.934276 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.934321 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.934333 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.934353 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:34 crc kubenswrapper[4679]: I0203 12:06:34.934384 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:34Z","lastTransitionTime":"2026-02-03T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.036941 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.036979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.036988 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.037004 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.037014 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.139618 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.139682 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.139697 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.139719 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.139734 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.204827 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:12:09.272012606 +0000 UTC Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.211201 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.211334 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.211221 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.211541 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.211640 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.211393 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.243481 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.243526 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.243536 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.243553 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.243565 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.346690 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.346727 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.346738 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.346753 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.346763 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.449762 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.449809 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.449822 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.449843 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.449859 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.552698 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.552747 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.552757 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.552775 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.552793 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.655652 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.655707 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.655721 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.655745 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.655763 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.759248 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.759290 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.759299 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.759318 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.759329 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.780207 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.780251 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.780265 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.780285 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.780298 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.795933 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:35Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.799895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.799940 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.799949 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.799965 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.799975 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.814895 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:35Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.820047 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.820558 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.820697 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.820826 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.820962 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.837727 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:35Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.845895 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.845957 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.845970 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.845998 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.846009 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.859896 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:35Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.864145 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.864181 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.864192 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.864215 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.864228 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.877062 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:35Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:35 crc kubenswrapper[4679]: E0203 12:06:35.877168 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.879723 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.879758 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.879772 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.879786 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.879799 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.983188 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.983237 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.983249 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.983266 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:35 crc kubenswrapper[4679]: I0203 12:06:35.983280 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:35Z","lastTransitionTime":"2026-02-03T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.041007 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:36 crc kubenswrapper[4679]: E0203 12:06:36.041288 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:36 crc kubenswrapper[4679]: E0203 12:06:36.041457 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:08.041425616 +0000 UTC m=+100.516321874 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.086080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.086122 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.086132 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.086150 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.086161 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.189626 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.189680 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.189694 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.189719 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.189737 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.205070 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:23:45.813902066 +0000 UTC Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.211525 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:36 crc kubenswrapper[4679]: E0203 12:06:36.211737 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.292841 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.292888 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.292897 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.292914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.292924 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.395619 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.395686 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.395696 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.395711 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.395721 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.498779 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.498834 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.498847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.498867 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.498879 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.601798 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.602081 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.602158 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.602225 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.602292 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.705228 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.705567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.705659 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.705736 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.705799 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.808393 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.808456 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.808475 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.808500 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.808518 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.911642 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.911695 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.911705 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.911723 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:36 crc kubenswrapper[4679]: I0203 12:06:36.911733 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:36Z","lastTransitionTime":"2026-02-03T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.014696 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.014750 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.014759 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.014774 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.014785 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.118234 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.118572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.118649 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.118732 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.118794 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.205594 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:33:35.618669297 +0000 UTC Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.211177 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.211240 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:37 crc kubenswrapper[4679]: E0203 12:06:37.211327 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.211394 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:37 crc kubenswrapper[4679]: E0203 12:06:37.211493 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:37 crc kubenswrapper[4679]: E0203 12:06:37.211570 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.221256 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.221300 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.221313 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.221331 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.221343 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.325317 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.325379 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.325389 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.325405 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.325418 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.428290 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.428320 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.428329 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.428342 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.428351 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.530851 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.530892 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.530902 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.530915 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.530925 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.633430 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.633483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.633498 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.633513 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.633523 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.735903 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.736125 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.736252 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.736330 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.736422 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.839416 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.839465 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.839477 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.839499 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.839509 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.945764 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.945819 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.945829 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.945852 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:37 crc kubenswrapper[4679]: I0203 12:06:37.945866 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:37Z","lastTransitionTime":"2026-02-03T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.048050 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.048101 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.048110 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.048129 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.048141 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.150513 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.150559 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.150569 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.150585 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.150596 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.206153 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:12:48.68578132 +0000 UTC Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.211610 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:38 crc kubenswrapper[4679]: E0203 12:06:38.211816 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.227675 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.244244 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.253306 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.253350 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.253382 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.253402 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.253416 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.256557 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.271865 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.284954 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.298828 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.340745 4679 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.356491 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.356537 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.356559 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.356576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.356590 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.383172 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.406657 4679 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.422627 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.437562 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.453418 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.459243 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.459303 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.459315 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.459336 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.459348 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.468148 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.480992 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.495849 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.509209 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.528324 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.543175 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.562263 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.562372 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.562385 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.562400 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.562414 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.647690 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/0.log" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.647774 4679 generic.go:334] "Generic (PLEG): container finished" podID="413e7c7d-7c01-4502-8d73-3c3df2e60956" containerID="2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0" exitCode=1 Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.647828 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerDied","Data":"2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.648846 4679 scope.go:117] "RemoveContainer" containerID="2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.664818 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.666130 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.666171 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.666187 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.666207 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.666219 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.682254 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.698777 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.713252 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.731225 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.744891 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.759834 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc 
kubenswrapper[4679]: I0203 12:06:38.769540 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.769578 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.769588 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.769628 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.769641 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.775808 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.790342 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.807949 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.827769 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.840959 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.856733 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.872783 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.872831 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.872843 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.872864 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.872875 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.874461 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.889650 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.905647 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.921615 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.936954 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:38Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:38 crc 
kubenswrapper[4679]: I0203 12:06:38.975716 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.975757 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.975767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.975784 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:38 crc kubenswrapper[4679]: I0203 12:06:38.975796 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:38Z","lastTransitionTime":"2026-02-03T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.078027 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.078066 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.078077 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.078093 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.078106 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.181310 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.181397 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.181407 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.181423 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.181433 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.207056 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:48:33.204459195 +0000 UTC Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.211514 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.211605 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:39 crc kubenswrapper[4679]: E0203 12:06:39.211677 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.211610 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:39 crc kubenswrapper[4679]: E0203 12:06:39.211764 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:39 crc kubenswrapper[4679]: E0203 12:06:39.211947 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.283840 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.283886 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.283898 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.283914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.283926 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.386205 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.386241 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.386250 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.386264 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.386277 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.489261 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.489309 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.489322 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.489339 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.489348 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.592802 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.592859 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.592869 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.592888 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.592900 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.654167 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/0.log" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.654256 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerStarted","Data":"f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.667927 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.686259 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.696096 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.696153 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.696163 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.696177 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.696210 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.704730 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.723943 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.744434 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.765110 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.777902 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.793977 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.799133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.799190 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.799199 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.799218 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.799229 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.808506 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.824229 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.837712 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.851089 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.866836 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.881411 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.895080 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.901501 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.901557 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.901568 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.901588 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.901599 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:39Z","lastTransitionTime":"2026-02-03T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.909552 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.925182 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:39 crc kubenswrapper[4679]: I0203 12:06:39.939757 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:39Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.005075 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.005131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.005145 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.005169 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.005182 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:40Z","lastTransitionTime":"2026-02-03T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.207505 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:20:03.029348547 +0000 UTC
Feb 03 12:06:40 crc kubenswrapper[4679]: I0203 12:06:40.210796 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:06:40 crc kubenswrapper[4679]: E0203 12:06:40.210989 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5"
[... the five-entry node-status block above (the four "Recording event message for node" events plus the "Node became not ready" condition) repeats with identical content at 12:06:40.108, 12:06:40.211, and 12:06:40.313; only the timestamps differ ...]
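The "Failed to update status for pod" entries in this log all trace back to one fault: the kubelet's status PATCH must pass the pod.network-node-identity.openshift.io admission webhook, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-02-03. Below is a minimal, standalone Go sketch of the validity-window check that produces "x509: certificate has expired or is not yet valid"; the certificate path is a hypothetical placeholder, not the webhook's real mount point.

```go
// certcheck.go — standalone sketch, not OpenShift code: reproduces the
// validity-window test behind "x509: certificate has expired or is not
// yet valid" from the webhook errors in this log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the real serving cert of the network-node-identity
	// webhook lives wherever its deployment mounts it.
	data, err := os.ReadFile("/etc/webhook/serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// crypto/x509 rejects any verification time outside [NotBefore, NotAfter];
	// in this log, now (2026-02-03) is after NotAfter (2025-08-24).
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}
```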
[... the same node-status block repeats at 12:06:40.417, 12:06:40.519, 12:06:40.622, 12:06:40.725, 12:06:40.828, and 12:06:40.932 ...]
[... the node-status block repeats at 12:06:41.034 and 12:06:41.137 ...]
Feb 03 12:06:41 crc kubenswrapper[4679]: I0203 12:06:41.208376 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:58:42.720208992 +0000 UTC
Feb 03 12:06:41 crc kubenswrapper[4679]: I0203 12:06:41.210748 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:41 crc kubenswrapper[4679]: I0203 12:06:41.210766 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:41 crc kubenswrapper[4679]: I0203 12:06:41.210801 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:41 crc kubenswrapper[4679]: E0203 12:06:41.211426 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:41 crc kubenswrapper[4679]: E0203 12:06:41.211557 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:41 crc kubenswrapper[4679]: E0203 12:06:41.211748 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the node-status block repeats at 12:06:41.241 ...]
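Every "Node became not ready" condition above carries the same root message: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI config, and OVN-Kubernetes only writes one once its own pods are healthy. A stdlib-only Go sketch of that directory probe follows; the real check goes through libcni inside the runtime, and the extension list here is an assumption.

```go
// cnicheck.go — approximates the "no CNI configuration file" readiness
// probe from this log; not the actual libcni implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var found []string
	// Assumed extensions; libcni accepts .conf, .conflist and .json files.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil { // only possible on a malformed pattern
			continue
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configs found:", found)
}
```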
[... the node-status block repeats at 12:06:41.344, 12:06:41.446, 12:06:41.549, 12:06:41.653, 12:06:41.757, 12:06:41.860, 12:06:41.963, 12:06:42.066, and 12:06:42.168 ...]
Feb 03 12:06:42 crc kubenswrapper[4679]: I0203 12:06:42.208494 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:59:20.522016769 +0000 UTC
Feb 03 12:06:42 crc kubenswrapper[4679]: I0203 12:06:42.210972 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:06:42 crc kubenswrapper[4679]: E0203 12:06:42.211146 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5"
[... the node-status block repeats at 12:06:42.271 and 12:06:42.373 ...]
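The certificate_manager.go entries print a different rotation deadline each second (2025-12-30, 2025-11-25, 2026-01-06, ...) because the deadline is recomputed with fresh jitter on every evaluation, picking a random point late in the certificate's validity window. A sketch of that computation is below; the 70–100% window mirrors client-go's certificate manager as I understand it, but treat the exact fraction and the assumed 90-day issue window as assumptions.

```go
// rotation.go — sketch of a jittered certificate-rotation deadline like the
// kubelet-serving ones logged above; the jitter window is an assumption.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a random instant in the last ~30% of the
// certificate's validity window, so repeated calls give different answers.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	offset := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(offset)
}

func main() {
	// Expiry taken from the log; the issue time assumes a 90-day lifetime.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-90 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```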
[... the node-status block repeats at 12:06:42.477, 12:06:42.581, 12:06:42.683, 12:06:42.786, 12:06:42.891, and 12:06:42.994 ...]
[... the node-status block repeats at 12:06:43.098 and 12:06:43.201 ...]
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.209372 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:53:00.356961058 +0000 UTC
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.211616 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.211637 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:43 crc kubenswrapper[4679]: E0203 12:06:43.211751 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:43 crc kubenswrapper[4679]: E0203 12:06:43.211942 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.212036 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:43 crc kubenswrapper[4679]: E0203 12:06:43.212746 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.213247 4679 scope.go:117] "RemoveContainer" containerID="a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8"
[... the node-status block repeats at 12:06:43.304 ...]
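Further below, the multus-2zqm7 container status records why the kube-multus container restarted: its readiness-indicator check waited for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf and gave up with "pollimmediate error: timed out waiting for the condition". Here is a stdlib-only sketch of that wait loop; the one-second interval is an assumption, and the 45-second budget matches the 12:05:52 → 12:06:37 gap in that message.

```go
// indicatorwait.go — stdlib-only sketch of multus's readiness-indicator wait;
// the loop shape approximates k8s.io/apimachinery's wait.PollImmediate
// (check immediately, then poll until timeout), not the exact multus code.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the network provider wrote its config
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// File named in the multus error message; interval/timeout are assumptions.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	fmt.Println("result:", err)
}
```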
[... the node-status block repeats at 12:06:43.407 and 12:06:43.511 ...]
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.614499 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.614576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.614588 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.614608 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.614619 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:43Z","lastTransitionTime":"2026-02-03T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.670283 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/2.log" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.673122 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640"} Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.673730 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.700758 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f
5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.717655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.717703 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.717718 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.717737 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.717750 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:43Z","lastTransitionTime":"2026-02-03T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.722338 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.736550 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.751004 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.765830 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.778548 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.796417 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc 
kubenswrapper[4679]: I0203 12:06:43.813538 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.820502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.820550 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.820562 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.820583 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.820595 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:43Z","lastTransitionTime":"2026-02-03T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.827678 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.844483 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.868018 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service 
openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.881833 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.895938 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.912855 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.923785 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.923844 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.923855 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.923879 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.923891 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:43Z","lastTransitionTime":"2026-02-03T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.932199 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.944378 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.956243 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:43 crc kubenswrapper[4679]: I0203 12:06:43.968306 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:43Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.026739 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.026780 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.026789 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.026804 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.026814 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.129881 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.130470 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.130483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.130519 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.130536 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.210122 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:30:36.085883947 +0000 UTC Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.211553 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:44 crc kubenswrapper[4679]: E0203 12:06:44.211745 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.233199 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.233258 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.233274 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.233292 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.233307 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.336065 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.336105 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.336116 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.336134 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.336146 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.438345 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.438407 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.438417 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.438431 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.438444 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.541024 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.541068 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.541081 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.541103 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.541119 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.644225 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.644288 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.644304 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.644325 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.644337 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.678779 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/3.log" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.679733 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/2.log" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.683436 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640" exitCode=1 Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.683486 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.683548 4679 scope.go:117] "RemoveContainer" containerID="a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.684850 4679 scope.go:117] "RemoveContainer" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640" Feb 03 12:06:44 crc kubenswrapper[4679]: E0203 12:06:44.685158 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.700044 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.716963 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.737572 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.747795 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.747839 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.747851 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.747868 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.747882 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.753680 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.768459 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.784386 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.798576 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.818557 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 
2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.838525 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e
42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a825d6657e9cbee335218fd87c4b0f0636140ba7488280319e91fbe26dfdd5b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:15Z\\\",\\\"message\\\":\\\"03 12:06:15.206189 6313 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0203 12:06:15.206194 6313 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nF0203 12:06:15.206121 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:15Z is after 2025-08-24T17:21:41Z]\\\\nI0203 12:06:15.206201 6313 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0203 12:06:15.206160 6313 services_controller.go:443] Built service 
openshift-machi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:44Z\\\",\\\"message\\\":\\\"6g in node crc\\\\nI0203 12:06:44.082545 6711 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:06:44.082570 6711 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.850301 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.850376 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.850402 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.850430 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.850445 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.850987 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.864259 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.877744 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.890302 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.904145 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.918758 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.930629 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.943014 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.952881 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.952933 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.952946 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.952964 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.952976 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:44Z","lastTransitionTime":"2026-02-03T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:44 crc kubenswrapper[4679]: I0203 12:06:44.957516 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:44Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.055950 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.056030 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.056042 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.056064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.056078 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.158731 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.158790 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.158804 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.158820 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.158831 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.211249 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:02:08.841946289 +0000 UTC
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.211411 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.211475 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.211673 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:45 crc kubenswrapper[4679]: E0203 12:06:45.211794 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:45 crc kubenswrapper[4679]: E0203 12:06:45.211891 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:45 crc kubenswrapper[4679]: E0203 12:06:45.211996 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.228149 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.261063 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.261111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.261124 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.261144 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.261155 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.467791 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.467865 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.467881 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.467902 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.467913 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.571251 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.571322 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.571336 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.571390 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.571413 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.674011 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.674065 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.674075 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.674098 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.674114 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.689522 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/3.log"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.694129 4679 scope.go:117] "RemoveContainer" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640"
Feb 03 12:06:45 crc kubenswrapper[4679]: E0203 12:06:45.694322 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.707457 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.719887 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.734434 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.750528 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.764600 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.777091 4679 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.777141 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.777154 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.777176 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.777192 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.785151 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2626117-c04a-45c1-aeac-b5f20a78f1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c3da08082d790a6d33bd3d86d43513d13a2833f5cc0edc3f7e3abc62418b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6b3c2aa06006a68a19f3b6967b74249a3fb631c1ad2fd0660d2940807e6d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267c1aff643b3ade526a8c39f0ba7c3451d6c0d799a40deb146e97b62b771a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc745234ff5eb96ca3c95548186c514cd0a81fb3ed85f11b76dd508b0b2233b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca4414eb93e2ceb2ba2ea966534c3f85ca9f237067094ee660f5e3b9daef711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.803427 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.818135 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.839082 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf
9377ba52025254344014b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:44Z\\\",\\\"message\\\":\\\"6g in node crc\\\\nI0203 12:06:44.082545 6711 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:06:44.082570 6711 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.853198 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.867456 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.879513 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.879556 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.879568 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.879587 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.879600 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.882572 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.895428 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.903575 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.913955 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.922131 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.932937 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.943245 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.955023 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:45Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.982050 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.982093 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.982103 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.982122 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:45 crc kubenswrapper[4679]: I0203 12:06:45.982134 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:45Z","lastTransitionTime":"2026-02-03T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.085111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.085146 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.085157 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.085174 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.085184 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.123020 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.123065 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.123073 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.123093 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.123107 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.136152 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:46Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.139749 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.139788 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.139802 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.139821 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.139834 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.150451 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:46Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.153686 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.153731 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
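The repeated patch failures in this window share a single root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter is 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03T12:06:46Z, so the kubelet's status PATCH is rejected before it ever reaches the Node object. A minimal Python sketch of the same validity check (illustrative, not part of the log; it assumes the third-party cryptography package, 42.0 or later, is available on the node, and takes only the host and port from the Post URL in the error):

import datetime
import socket
import ssl

from cryptography import x509  # assumption: third-party package, >= 42.0 for the *_utc accessors

HOST, PORT = "127.0.0.1", 9743  # from Post "https://127.0.0.1:9743/node?timeout=10s"

# Disable verification so the handshake succeeds even with an expired certificate;
# the point is to read the certificate, not to trust it.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.now(datetime.timezone.utc)
print("notAfter:", cert.not_valid_after_utc)       # 2025-08-24T17:21:41Z in the log
print("expired:", now > cert.not_valid_after_utc)  # True here: 2026-02-03 is past notAfter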
event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.153741 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.153759 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.153777 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.165836 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:46Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.169080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.169111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
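Independently of the expired webhook certificate, the Ready condition itself is False because the kubelet finds no CNI network config: "no CNI configuration file in /etc/kubernetes/cni/net.d/." A short sketch of the on-node check that message suggests (illustrative; only the directory path comes from the log, the accepted extensions and field picks are assumptions):

import json
import pathlib

CNI_CONF_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # directory named in the kubelet error

if not CNI_CONF_DIR.is_dir():
    print(f"{CNI_CONF_DIR} is missing - the network plugin never wrote its config")
else:
    # CNI loaders accept .conf, .conflist and .json files (assumed set of extensions)
    confs = sorted(p for p in CNI_CONF_DIR.iterdir() if p.suffix in {".conf", ".conflist", ".json"})
    if not confs:
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ - matches the kubelet message")
    for p in confs:
        doc = json.loads(p.read_text())
        # a .conflist carries a "plugins" array; a single .conf carries "type" directly
        plugins = doc.get("type") or [pl.get("type") for pl in doc.get("plugins", [])]
        print(p.name, "->", doc.get("name"), plugins)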
event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.169120 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.169136 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.169147 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.182853 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:46Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.187172 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.187217 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
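The kubelet retries the status PATCH a fixed number of times per sync before giving up; this window shows five "Error updating node status, will retry" entries followed (below, at 12:06:46.199433) by "update node status exceeds retry count". A schematic Python sketch of that retry shape (names are illustrative and only loosely mirror kubelet's tryUpdateNodeStatus loop; this is not kubelet code):

NODE_STATUS_UPDATE_RETRY = 5  # five attempts, matching the five errors in this log window

def update_node_status(patch_once):
    """Schematic retry loop: try the PATCH, log and retry on failure, then give up."""
    for attempt in range(NODE_STATUS_UPDATE_RETRY):
        try:
            patch_once()
            return
        except RuntimeError as err:  # e.g. the webhook TLS verification failure above
            print(f'"Error updating node status, will retry" attempt={attempt + 1} err={err}')
    raise RuntimeError("update node status exceeds retry count")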
event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.187231 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.187251 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.187265 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.199293 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:46Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.199433 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.201450 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.201497 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.201510 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.201533 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.201546 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.210944 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:46 crc kubenswrapper[4679]: E0203 12:06:46.211116 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.211796 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:24:39.93200666 +0000 UTC Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.305655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.305738 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.305751 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.305842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.305872 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.409212 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.409251 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.409260 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.409278 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.409292 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.512095 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.512181 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.512197 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.512221 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.512235 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.616366 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.616480 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.616498 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.616520 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.616538 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.719675 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.719735 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.719747 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.719768 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.719785 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.822830 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.822879 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.822890 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.822907 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.822918 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.926479 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.926577 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.926622 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.926662 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:46 crc kubenswrapper[4679]: I0203 12:06:46.926685 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:46Z","lastTransitionTime":"2026-02-03T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.029735 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.029805 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.029822 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.029847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.029861 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.132858 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.133189 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.133271 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.133358 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.133462 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.210741 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.210869 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:47 crc kubenswrapper[4679]: E0203 12:06:47.210927 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:47 crc kubenswrapper[4679]: E0203 12:06:47.211042 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.211148 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:47 crc kubenswrapper[4679]: E0203 12:06:47.211210 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.212810 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:16:40.283676512 +0000 UTC Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.236863 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.236921 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.236933 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.236956 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.236973 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.341172 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.341214 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.341224 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.341243 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.341255 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.443979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.444043 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.444058 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.444079 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.444094 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.546675 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.546729 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.546745 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.546767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.546780 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.648970 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.649039 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.649059 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.649090 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.649115 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.752346 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.752434 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.752450 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.752471 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.752488 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.855880 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.855952 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.855968 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.855994 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.856011 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.959683 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.959767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.959784 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.959805 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:47 crc kubenswrapper[4679]: I0203 12:06:47.959819 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:47Z","lastTransitionTime":"2026-02-03T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.063891 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.063965 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.063993 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.064042 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.064065 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.166609 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.166687 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.166703 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.166724 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.166740 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.211413 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:48 crc kubenswrapper[4679]: E0203 12:06:48.211685 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.213491 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:31:37.560658541 +0000 UTC Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.227438 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.246058 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.261220 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.269156 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.269189 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.269199 4679 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.269441 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.269454 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.275547 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.289782 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.313535 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2626117-c04a-45c1-aeac-b5f20a78f1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c3da08082d790a6d33bd3d86d43513d13a2833f5cc0edc3f7e3abc62418b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6b3c2aa06006a68a19f3b6967b74249a3fb631c1ad2fd0660d2940807e6d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267c1aff643b3ade526a8c39f0ba7c3451d6c0d799a40deb146e97b62b771a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc745234ff5eb96ca3c95548186c514cd0a81fb3ed85f11b76dd508b0b2233b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca4414eb93e2ceb2ba2ea966534c3f85ca9f237067094ee660f5e3b9daef711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.328860 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.343952 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.368430 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf
9377ba52025254344014b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:44Z\\\",\\\"message\\\":\\\"6g in node crc\\\\nI0203 12:06:44.082545 6711 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:06:44.082570 6711 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.372074 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.372145 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.372162 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.372194 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.372216 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.380808 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.398308 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.412600 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.426127 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.438088 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.451723 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.465158 4679 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.474873 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.474959 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.474972 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.474992 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.475005 4679 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.490928 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.506094 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.519705 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:48Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.577958 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.578014 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.578030 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.578051 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.578064 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.680735 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.680779 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.680791 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.680810 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.680835 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.783659 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.783910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.784146 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.784341 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.784588 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.887604 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.887871 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.888055 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.888142 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.888230 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.991149 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.991231 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.991252 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.991322 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:48 crc kubenswrapper[4679]: I0203 12:06:48.991337 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:48Z","lastTransitionTime":"2026-02-03T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.093647 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.093694 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.093706 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.093726 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.093737 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.196133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.196206 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.196222 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.196245 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.196259 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.211112 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.211205 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.211294 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:49 crc kubenswrapper[4679]: E0203 12:06:49.211301 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
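The failed status patches earlier in this stream (kube-apiserver-crc, network-check-target-xd92c, networking-console-plugin-85b44fc459-gdk6g) all die on the same TLS verification: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. A minimal Go sketch of the check the TLS layer is performing, assuming the certificate is available as a PEM file at a hypothetical path (the real location depends on the deployment):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute wherever the webhook's serving cert lives.
        data, err := os.ReadFile("/etc/webhook/tls.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // first PEM block holds the leaf certificate
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if now := time.Now(); now.After(cert.NotAfter) {
            // Same condition the log reports: current time is after NotAfter.
            fmt.Printf("expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Until that certificate is renewed, every status patch routed through the webhook will keep failing the same way, which is why the identical error repeats for each pod below.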
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:49 crc kubenswrapper[4679]: E0203 12:06:49.211477 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:49 crc kubenswrapper[4679]: E0203 12:06:49.211569 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.214400 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:44:58.198976912 +0000 UTC Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.298848 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.298896 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.298906 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.298927 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.298956 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.402051 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.402133 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.402146 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.402165 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.402179 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.505153 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.505237 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.505257 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.505281 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.505300 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.607824 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.607865 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.607874 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.607890 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.607900 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.710953 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.711011 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.711029 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.711052 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.711067 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.814100 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.814147 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.814161 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.814179 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.814194 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.917162 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.917208 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.917219 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.917232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:49 crc kubenswrapper[4679]: I0203 12:06:49.917242 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:49Z","lastTransitionTime":"2026-02-03T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.021502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.021557 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.021573 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.021593 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.021607 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.124507 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.124572 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.124585 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.124611 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.124623 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.211383 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:50 crc kubenswrapper[4679]: E0203 12:06:50.211565 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.214511 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:57:03.019103088 +0000 UTC Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.227599 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.227655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.227671 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.227692 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.227708 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.330429 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.330475 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.330487 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.330506 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.330518 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.433581 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.433626 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.433638 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.433655 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.433666 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.537065 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.537126 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.537136 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.537152 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.537162 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.639892 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.639951 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.639960 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.639975 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.639987 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.742229 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.742280 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.742292 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.742308 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.742355 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.845528 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.845615 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.845629 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.845651 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.845683 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.948571 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.948658 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.948676 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.948717 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:50 crc kubenswrapper[4679]: I0203 12:06:50.948732 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:50Z","lastTransitionTime":"2026-02-03T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.051748 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.051808 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.051817 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.051834 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.051861 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.155480 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.155534 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.155544 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.155564 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.155578 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.211184 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.211211 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:51 crc kubenswrapper[4679]: E0203 12:06:51.211828 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.212328 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:51 crc kubenswrapper[4679]: E0203 12:06:51.212498 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.215875 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:36:36.124789492 +0000 UTC Feb 03 12:06:51 crc kubenswrapper[4679]: E0203 12:06:51.216755 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.258476 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.258647 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.258701 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.258774 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.258788 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.361685 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.361721 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.361729 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.361745 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.361757 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.464687 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.464741 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.464750 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.464768 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.464778 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.567664 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.567741 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.567754 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.567775 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.567787 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.670902 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.670959 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.670979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.671024 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.671037 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.773751 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.773807 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.773818 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.773853 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.773866 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.877603 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.877648 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.877660 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.877678 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.877694 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.980465 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.980529 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.980542 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.980565 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:51 crc kubenswrapper[4679]: I0203 12:06:51.980580 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:51Z","lastTransitionTime":"2026-02-03T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.083742 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.083799 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.083820 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.083842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.083856 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.134294 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.134486 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.134523 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.134557 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.134578 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134673 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.134638442 +0000 UTC m=+148.609534530 (durationBeforeRetry 1m4s). 
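The reconciler entry above shows kubelet's per-operation exponential backoff: after repeated failures of the same volume operation it refuses to retry for 1m4s (durationBeforeRetry), which is what a 500ms base delay doubled seven times yields; the Error: line that follows carries the detail of the failed operation. The base and cap in this sketch are assumptions chosen to reproduce that figure, not values printed in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed schedule: 500ms base, doubling per failure, capped near 2m.
        // Eight consecutive failures land on the 1m4s seen as durationBeforeRetry.
        delay := 500 * time.Millisecond
        maxDelay := 2 * time.Minute
        for failure := 1; failure <= 9; failure++ {
            fmt.Printf("failure %d -> retry in %s\n", failure, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
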
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134674 4679 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134754 4679 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134889 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134912 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134926 4679 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134995 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.135008 4679 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.135016 4679 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.134760 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.134751276 +0000 UTC m=+148.609647364 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.135066 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.135042814 +0000 UTC m=+148.609939082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.135084 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.135077045 +0000 UTC m=+148.609973353 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.135095 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.135089165 +0000 UTC m=+148.609985483 (durationBeforeRetry 1m4s). 
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.186128 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.186184 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.186199 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.186222 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.186235 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.212775 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:06:52 crc kubenswrapper[4679]: E0203 12:06:52.212976 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.216284 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:51:01.211467586 +0000 UTC
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.289304 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.289451 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.289471 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.289497 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.289516 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.392688 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.392739 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.392751 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.392767 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.392778 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.495923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.495963 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.495976 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.495992 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.496004 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.598842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.598924 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.598936 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.598960 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.598973 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.701723 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.701813 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.701842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.701874 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.701922 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.804835 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.804870 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.804913 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.804930 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.804942 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.907939 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.908002 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.908016 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.908037 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:52 crc kubenswrapper[4679]: I0203 12:06:52.908051 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:52Z","lastTransitionTime":"2026-02-03T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.011477 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.011532 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.011542 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.011561 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.011572 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.114549 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.114624 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.114636 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.114665 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.114679 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.211590 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.211686 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.211758 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:53 crc kubenswrapper[4679]: E0203 12:06:53.211924 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:53 crc kubenswrapper[4679]: E0203 12:06:53.212134 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:53 crc kubenswrapper[4679]: E0203 12:06:53.212313 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.216501 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:25:59.072382037 +0000 UTC
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.217891 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.217936 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.217949 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.217968 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.217983 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.321991 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.322229 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.322289 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.322319 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.322340 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.425122 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.425194 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.425204 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.425222 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.425234 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.527852 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.527911 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.527923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.527979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.527994 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.630824 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.630880 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.630891 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.630914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.630928 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.732995 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.733049 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.733058 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.733080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.733093 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.840053 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.840184 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.840203 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.840231 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.840255 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.943726 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.943797 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.943812 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.943837 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:53 crc kubenswrapper[4679]: I0203 12:06:53.943851 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:53Z","lastTransitionTime":"2026-02-03T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.047060 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.047107 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.047117 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.047137 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.047147 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.150033 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.150087 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.150097 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.150119 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.150130 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.211693 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:06:54 crc kubenswrapper[4679]: E0203 12:06:54.211934 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.216654 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:33:25.746615718 +0000 UTC
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.253506 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.253560 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.253571 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.253594 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.253609 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.356868 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.357862 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.357906 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.357935 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.357947 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.461016 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.461068 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.461080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.461099 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.461114 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.563532 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.563590 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.563609 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.563633 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.563652 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.666857 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.666911 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.666923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.666941 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.666954 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.769661 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.769715 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.769725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.769748 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.769762 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.873827 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.874424 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.874443 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.874469 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.874486 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.977873 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.977915 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.977924 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.977944 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:54 crc kubenswrapper[4679]: I0203 12:06:54.977955 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:54Z","lastTransitionTime":"2026-02-03T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.080763 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.080823 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.080839 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.080867 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.080883 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.183977 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.184037 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.184046 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.184063 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.184089 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.211010 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.211063 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.211019 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:06:55 crc kubenswrapper[4679]: E0203 12:06:55.211182 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 12:06:55 crc kubenswrapper[4679]: E0203 12:06:55.211454 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 12:06:55 crc kubenswrapper[4679]: E0203 12:06:55.211542 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.217293 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:44:26.162093439 +0000 UTC
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.286458 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.286503 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.286517 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.286538 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.286553 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.389194 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.389238 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.389251 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.389267 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.389280 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.492968 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.493032 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.493051 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.493083 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.493099 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.595995 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.596060 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.596070 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.596088 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.596100 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.699333 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.699423 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.699436 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.699457 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.699473 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.802353 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.802436 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.802449 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.802472 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.802485 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.905576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.905624 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.905638 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.905654 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:55 crc kubenswrapper[4679]: I0203 12:06:55.905665 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:55Z","lastTransitionTime":"2026-02-03T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.008691 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.008745 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.008753 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.008770 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.008782 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.113006 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.113089 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.113111 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.113144 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.113167 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.211743 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.211959 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.216134 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.216197 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.216210 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.216235 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.216256 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.220871 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:28:12.140058107 +0000 UTC
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.319670 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.319716 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.319725 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.319743 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.319758 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.422567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.423000 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.423137 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.423227 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.423316 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.496019 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.496088 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.496098 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.496117 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.496127 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.508069 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.513183 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.513232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.513244 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.513262 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.513274 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.526675 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.531511 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.531672 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.531754 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.531847 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.531915 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.545281 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.550082 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.550131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.550142 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.550162 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.550177 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.563754 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.568989 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.569037 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.569048 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.569068 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.569080 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.584158 4679 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af702107-1d4b-4aae-b3c8-60dab6d82e59\\\",\\\"systemUUID\\\":\\\"de5ca927-c183-4b52-ac09-5efe9929986a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:56Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:56 crc kubenswrapper[4679]: E0203 12:06:56.584348 4679 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.586404 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.586533 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.586605 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.586682 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.586760 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.689300 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.689349 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.689382 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.689403 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.689416 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.792150 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.792207 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.792217 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.792236 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.792247 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.895901 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.895964 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.895976 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.895997 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.896010 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.998282 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.998329 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.998346 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.998385 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:56 crc kubenswrapper[4679]: I0203 12:06:56.998398 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:56Z","lastTransitionTime":"2026-02-03T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.101292 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.101336 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.101346 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.101380 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.101392 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.204231 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.204285 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.204297 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.204314 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.204327 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.210951 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.210984 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.211063 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:57 crc kubenswrapper[4679]: E0203 12:06:57.211200 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:57 crc kubenswrapper[4679]: E0203 12:06:57.211314 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:57 crc kubenswrapper[4679]: E0203 12:06:57.211484 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.223568 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:35:30.886277382 +0000 UTC Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.307549 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.307597 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.307609 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.307628 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.307642 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.410305 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.410396 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.410412 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.410436 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.410456 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.513212 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.513272 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.513284 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.513304 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.513319 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.616263 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.616315 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.616324 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.616344 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.616360 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.718948 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.718996 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.719008 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.719028 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.719042 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.821751 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.821804 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.821814 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.821829 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.821840 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.924009 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.924060 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.924071 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.924088 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:57 crc kubenswrapper[4679]: I0203 12:06:57.924106 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:57Z","lastTransitionTime":"2026-02-03T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.026550 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.026630 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.026649 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.026675 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.026695 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.129115 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.129191 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.129210 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.129242 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.129261 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.211712 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:06:58 crc kubenswrapper[4679]: E0203 12:06:58.211895 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.224855 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:36:35.867228555 +0000 UTC Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.227173 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51af7bd4-7c1e-4a75-bf78-6f8c1cb94c54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b3690252db34b84722935ed5e124af6fd969101339718f3a991b5389bd2ca59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85681edc5ae8908110f5783620db398da6eb4507ac2dd954fcfae1e7524b21c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad0f79b8a23835a14cb06a3a1436050054e4d65231e98ea57b0049a25faf1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.231744 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.231806 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.231816 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.231835 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.231868 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.242488 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"413e7c7d-7c01-4502-8d73-3c3df2e60956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:37Z\\\",\\\"message\\\":\\\"2026-02-03T12:05:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952\\\\n2026-02-03T12:05:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_14e58b78-5774-4036-87b8-59b9ee896952 to /host/opt/cni/bin/\\\\n2026-02-03T12:05:52Z [verbose] multus-daemon started\\\\n2026-02-03T12:05:52Z [verbose] Readiness Indicator file check\\\\n2026-02-03T12:06:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqtsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.256085 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4042fbebb6df249b6208d2d5f16d87e3055030312bff57190e022f3eb871f597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ce0f85941c3cc6625d40f2b79865e935ac9e55066574d77faa99fe83636f5a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.271512 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb38298a-164d-4175-9d84-e9f199da55ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4e05e92ee173ca989791d373855cfc7e566e7be3a7017a91dac783d954393b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eba175c06e0d6da1447226bf6f6c4725b421b9bb3dc2993e8afc2ff8a84b1961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:06:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9qrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgmp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 
12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.284489 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8131a9b-483e-4678-a976-aff3b6b7f2fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74d69bb678e4f619711116af5820d18d03b1d9fd361f3a539d346a2b6304d1de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5ee13c4da375a59ac0370b6826231d73dad9e710493551be4e9c9a91f0a2518\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.305528 4679 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2626117-c04a-45c1-aeac-b5f20a78f1d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69c3da08082d790a6d33bd3d86d43513d13a2833f5cc0edc3f7e3abc62418b54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6b3c2aa06006a68a19f3b6967b74249a3fb631c1ad2fd0660d2940807e6d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://267c1aff643b3ade526a8c39f0ba7c3451d6c0d799a40deb146e97b62b771a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://efc745234ff5eb96ca3c95548186c514cd0a81fb3ed85f11b76dd508b0b2233b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ca4414eb93e2ceb2ba2ea966534c3f85ca9f237067094ee660f5e3b9daef711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3707df93ec7405fcfa0c78d2594b7730569edbdd8f1bda4678efdf68aaca05f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff957c6242095dce625e0d2120ef9624fd89ab21d63b13bfc63d8c7a405475a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://74786
4da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747864da6991bc0124eea4c57763e779c6788fed67cf90905689ee6284e39ec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.319230 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a29a48e7-7bad-4d75-b3c7-a1865c9f4df0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af5658efb668950e752970369e14eb1b7442f98c7705472116cb62adfc661fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e194b797963a17af146a9275a10bb2731806b20551688b99405e55ef44782523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33810661060d33172ac830efadf37b19b4767e11deaf8c57a9213431b39d3b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e7687b1bcfdb8b79d8fd5ef320e2c594ec682a4d14b9df3713a0846c5409ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.334340 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.334393 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.334408 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.334433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.334450 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.336827 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7f55n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65f2e85-782a-4313-b584-e3f1c9c8cf76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61dbaa8158920a469f4d8f7f7b35872decc04a2f13d9e01a735c5befb0ffec61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db49427106b28302835f5a26e2e441d0b59664ee3c75be08848b117bfd4e82c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae9763270c7115aee27d1afae26f62e777dbfb05dfe59c3443d4b7645cda5642\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9afa7ecfae41662911600ee4bbc75f26d61b7ee48871f1031782c670f850ab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfed8afdc1cb8baf0151e925ee0040f68c6361cfee4b61702400d491ea69fdf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf762065c557ae71ebfc1344082213bd4d2b92a318cfb533e58fcf134fc0c407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://578e1af79e09bfd9a84535b1a8b63ff90da92d1bc504f6e42fcc80140641663c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt8lm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7f55n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.357250 4679 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T12:06:44Z\\\",\\\"message\\\":\\\"6g in node crc\\\\nI0203 12:06:44.082545 6711 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 12:06:44.082570 6711 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xhpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b7ws5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.369598 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e4da3-455d-4394-824c-2dfe080bc2c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:06:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr7lv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:06:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j8bgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.385879 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80bab797a2aebcf75f2cc95f312ca56acae3e927ddf29031363beeffe72dce2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.400441 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.413860 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbb1ebe3359c6bd25895cb382468053c7a80e10b355bc5c83df5adc66ef33b6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.423639 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dz6f8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04ed4bc1-0ae0-4644-95d5-384077e1bcf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93ac9d9640a0bc307f442ecc1e14fe98161f20431d34aabcefd715069d62b5dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frt5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dz6f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.433763 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aac6e806e161dc117d37b7aa52a7a823cf905b2ab79416eb08f13ba0283e4b3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9r9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8qvcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.436831 4679 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.436888 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.436899 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.436914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.436925 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.445953 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4gcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a0d56c1-f4af-457d-a63d-2bef7730f28a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba83c437ec49b2b70ecb73524092531cfe7fb6816cf6552f94c62efcae241344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cg54l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4gcf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.460487 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55f95947-c090-45ca-8732-acab46870cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T12:05:41Z\\\",\\\"message\\\":\\\"W0203 12:05:31.400951 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 12:05:31.401371 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770120331 cert, and key in /tmp/serving-cert-3072246013/serving-signer.crt, /tmp/serving-cert-3072246013/serving-signer.key\\\\nI0203 12:05:31.637946 1 observer_polling.go:159] Starting file observer\\\\nW0203 12:05:31.640822 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 12:05:31.641021 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 12:05:31.643961 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3072246013/tls.crt::/tmp/serving-cert-3072246013/tls.key\\\\\\\"\\\\nF0203 12:05:41.954006 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T12:05:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T12:05:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T12:05:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.471343 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.482977 4679 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T12:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T12:06:58Z is after 2025-08-24T17:21:41Z" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.539611 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.539659 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.539673 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.539720 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.539734 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.641814 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.641900 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.641924 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.641961 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.641984 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.744347 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.744448 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.744460 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.744479 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.744489 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.847986 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.848092 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.848106 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.848131 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.848146 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.950657 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.950703 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.950722 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.950740 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:58 crc kubenswrapper[4679]: I0203 12:06:58.950752 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:58Z","lastTransitionTime":"2026-02-03T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.053591 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.053630 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.053641 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.053658 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.053668 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.156211 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.156283 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.156300 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.156329 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.156347 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.210713 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.210845 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.210870 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:06:59 crc kubenswrapper[4679]: E0203 12:06:59.210899 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:06:59 crc kubenswrapper[4679]: E0203 12:06:59.210997 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:06:59 crc kubenswrapper[4679]: E0203 12:06:59.211124 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.225949 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:26:00.492694841 +0000 UTC Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.259607 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.259714 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.259728 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.259746 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.259759 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.362051 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.362108 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.362123 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.362147 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.362166 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.465435 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.465493 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.465506 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.465524 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.465576 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.568615 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.568664 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.568673 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.568692 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.568703 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.671719 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.671777 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.671792 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.671816 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.671831 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.774791 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.775280 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.775349 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.775469 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.775546 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.878607 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.878648 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.878657 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.878673 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.878683 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.980966 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.981028 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.981038 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.981059 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:06:59 crc kubenswrapper[4679]: I0203 12:06:59.981072 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:06:59Z","lastTransitionTime":"2026-02-03T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.083353 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.083446 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.083463 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.083486 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.083504 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.187152 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.187201 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.187213 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.187233 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.187246 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.210681 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:00 crc kubenswrapper[4679]: E0203 12:07:00.210847 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.227487 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:45:07.232805288 +0000 UTC Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.289837 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.289881 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.289894 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.289919 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.289936 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.392274 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.392340 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.392382 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.392404 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.392417 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.494995 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.495048 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.495061 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.495082 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.495095 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.597966 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.598029 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.598043 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.598070 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.598083 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.701216 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.701270 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.701283 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.701306 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.701317 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.804021 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.804092 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.804120 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.804154 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.804179 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.906830 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.906891 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.906905 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.906925 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:00 crc kubenswrapper[4679]: I0203 12:07:00.906937 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:00Z","lastTransitionTime":"2026-02-03T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.010552 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.010623 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.010634 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.010653 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.010664 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.113563 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.113672 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.113695 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.113723 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.113748 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.210841 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.210879 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:01 crc kubenswrapper[4679]: E0203 12:07:01.210996 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.211107 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:01 crc kubenswrapper[4679]: E0203 12:07:01.211171 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:01 crc kubenswrapper[4679]: E0203 12:07:01.211314 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.212003 4679 scope.go:117] "RemoveContainer" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640" Feb 03 12:07:01 crc kubenswrapper[4679]: E0203 12:07:01.212155 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.216701 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.216747 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.216757 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.216772 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.216782 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.227947 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:38:20.978637092 +0000 UTC Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.319506 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.319564 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.319576 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.319597 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.319614 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.423522 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.423583 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.423597 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.423616 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.423632 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.527242 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.527290 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.527307 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.527328 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.527344 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.630823 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.630926 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.630941 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.630960 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.630970 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.733844 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.733886 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.733898 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.733914 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.733926 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.836789 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.836841 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.836852 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.836870 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.836882 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.940144 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.940223 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.940238 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.940260 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:01 crc kubenswrapper[4679]: I0203 12:07:01.940276 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:01Z","lastTransitionTime":"2026-02-03T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.043047 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.043502 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.043608 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.043689 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.043756 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.146125 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.146179 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.146191 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.146211 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.146227 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.211692 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:02 crc kubenswrapper[4679]: E0203 12:07:02.211904 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.228273 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:17:34.340919205 +0000 UTC Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.248532 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.248573 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.248582 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.248599 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.248610 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.352411 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.352475 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.352489 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.352510 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.352525 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.454980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.455033 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.455045 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.455066 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.455079 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.557683 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.557737 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.557750 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.557769 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.557781 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.660663 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.660708 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.660718 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.660736 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.660748 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.764155 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.764219 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.764232 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.764252 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.764265 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.867054 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.867121 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.867138 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.867161 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.867175 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.970098 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.970135 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.970144 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.970159 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:02 crc kubenswrapper[4679]: I0203 12:07:02.970170 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:02Z","lastTransitionTime":"2026-02-03T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.072840 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.072911 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.072930 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.072998 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.073028 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.176024 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.176064 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.176073 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.176089 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.176099 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.211594 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.211648 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.211600 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:03 crc kubenswrapper[4679]: E0203 12:07:03.211776 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:03 crc kubenswrapper[4679]: E0203 12:07:03.211888 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:03 crc kubenswrapper[4679]: E0203 12:07:03.211958 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.229555 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:13:04.457962051 +0000 UTC Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.278761 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.278809 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.278819 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.278836 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.278846 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.381270 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.381309 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.381320 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.381338 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.381349 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.484260 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.484327 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.484339 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.484400 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.484410 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.588482 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.588556 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.588577 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.588601 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.588640 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.691999 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.692076 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.692091 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.692112 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.692124 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.794646 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.794717 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.794728 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.794748 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.794762 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.898341 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.898400 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.898415 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.898435 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:03 crc kubenswrapper[4679]: I0203 12:07:03.898450 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:03Z","lastTransitionTime":"2026-02-03T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.001035 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.001080 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.001101 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.001122 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.001137 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.103545 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.104078 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.104335 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.104459 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.104526 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.207539 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.207918 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.208081 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.208305 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.208431 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.210807 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:04 crc kubenswrapper[4679]: E0203 12:07:04.211123 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.230626 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:43:09.66727945 +0000 UTC Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.311471 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.311507 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.311520 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.311537 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.311549 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.414274 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.414386 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.414400 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.414418 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.414431 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.517012 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.517072 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.517086 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.517112 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.517126 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.619614 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.619663 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.619675 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.619697 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.619709 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.723086 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.723137 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.723150 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.723171 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.723188 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.825842 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.825898 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.825910 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.825929 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.825940 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.928483 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.928536 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.928546 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.928567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:04 crc kubenswrapper[4679]: I0203 12:07:04.928578 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:04Z","lastTransitionTime":"2026-02-03T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.031859 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.031909 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.031923 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.031942 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.031955 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.134670 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.134740 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.134782 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.134801 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.134818 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.211230 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.211274 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.211341 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:05 crc kubenswrapper[4679]: E0203 12:07:05.211453 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:05 crc kubenswrapper[4679]: E0203 12:07:05.211544 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:05 crc kubenswrapper[4679]: E0203 12:07:05.211853 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.231331 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:17:58.024149325 +0000 UTC Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.238145 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.238200 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.238213 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.238234 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.238248 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.341433 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.341496 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.341508 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.341528 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.341543 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.444405 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.444468 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.444482 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.444505 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.444517 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.547784 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.547843 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.547853 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.547870 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.547881 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.651218 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.651284 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.651295 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.651318 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.651347 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.754584 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.754656 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.754673 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.754690 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.754703 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.857500 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.857555 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.857567 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.857588 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.857600 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.960389 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.960429 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.960438 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.960454 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:05 crc kubenswrapper[4679]: I0203 12:07:05.960465 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:05Z","lastTransitionTime":"2026-02-03T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.063138 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.063209 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.063222 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.063244 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.063258 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.165556 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.165613 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.165625 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.165640 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.165649 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.211373 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:06 crc kubenswrapper[4679]: E0203 12:07:06.211577 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.232466 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:13:22.891761578 +0000 UTC Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.269335 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.269418 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.269429 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.269448 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.269460 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.372330 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.372409 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.372420 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.372436 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.372446 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.475933 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.475980 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.475995 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.476015 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.476029 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.578979 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.579030 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.579042 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.579058 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.579083 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.644184 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.644336 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.644348 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.644387 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.644399 4679 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T12:07:06Z","lastTransitionTime":"2026-02-03T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.705666 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s"] Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.706239 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.709277 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.709434 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.710230 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.711980 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.747785 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.747748032 podStartE2EDuration="1m18.747748032s" podCreationTimestamp="2026-02-03 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.74662983 +0000 UTC m=+99.221525928" watchObservedRunningTime="2026-02-03 12:07:06.747748032 +0000 UTC m=+99.222644120" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.748130 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2zqm7" podStartSLOduration=77.748124212 podStartE2EDuration="1m17.748124212s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.729350886 +0000 UTC m=+99.204246984" watchObservedRunningTime="2026-02-03 12:07:06.748124212 +0000 UTC m=+99.223020300" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.776881 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=21.776841443 podStartE2EDuration="21.776841443s" podCreationTimestamp="2026-02-03 12:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.775730321 +0000 UTC m=+99.250626429" watchObservedRunningTime="2026-02-03 12:07:06.776841443 +0000 UTC m=+99.251737531" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.794731 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77341620-b0a5-4663-bd09-2f74f3b6cc46-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.794803 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/77341620-b0a5-4663-bd09-2f74f3b6cc46-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.794854 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77341620-b0a5-4663-bd09-2f74f3b6cc46-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.794896 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77341620-b0a5-4663-bd09-2f74f3b6cc46-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.794931 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77341620-b0a5-4663-bd09-2f74f3b6cc46-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.805470 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.80544045 podStartE2EDuration="49.80544045s" podCreationTimestamp="2026-02-03 12:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.789423903 +0000 UTC m=+99.264320001" watchObservedRunningTime="2026-02-03 12:07:06.80544045 +0000 UTC m=+99.280336538" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.820871 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgmp4" podStartSLOduration=77.820849491 podStartE2EDuration="1m17.820849491s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.820761968 +0000 UTC m=+99.295658056" watchObservedRunningTime="2026-02-03 12:07:06.820849491 +0000 UTC m=+99.295745579" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.852606 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.852586798 podStartE2EDuration="33.852586798s" podCreationTimestamp="2026-02-03 12:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.837439835 +0000 UTC m=+99.312335943" watchObservedRunningTime="2026-02-03 12:07:06.852586798 +0000 UTC m=+99.327482886" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.894527 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-7f55n" podStartSLOduration=77.894500285 podStartE2EDuration="1m17.894500285s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:06.894470485 +0000 UTC m=+99.369366583" watchObservedRunningTime="2026-02-03 12:07:06.894500285 +0000 UTC m=+99.369396373" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.895998 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77341620-b0a5-4663-bd09-2f74f3b6cc46-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.896087 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77341620-b0a5-4663-bd09-2f74f3b6cc46-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.896173 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77341620-b0a5-4663-bd09-2f74f3b6cc46-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.896220 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77341620-b0a5-4663-bd09-2f74f3b6cc46-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.896286 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77341620-b0a5-4663-bd09-2f74f3b6cc46-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.896308 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77341620-b0a5-4663-bd09-2f74f3b6cc46-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.896334 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77341620-b0a5-4663-bd09-2f74f3b6cc46-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.897343 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77341620-b0a5-4663-bd09-2f74f3b6cc46-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.916565 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77341620-b0a5-4663-bd09-2f74f3b6cc46-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:06 crc kubenswrapper[4679]: I0203 12:07:06.919414 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77341620-b0a5-4663-bd09-2f74f3b6cc46-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fhk2s\" (UID: \"77341620-b0a5-4663-bd09-2f74f3b6cc46\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.020463 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" Feb 03 12:07:07 crc kubenswrapper[4679]: W0203 12:07:07.039074 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77341620_b0a5_4663_bd09_2f74f3b6cc46.slice/crio-9da02ddc23c7f3a45affe55a733e84dd8e30aef7d0f3d69d88f976af3dd6e959 WatchSource:0}: Error finding container 9da02ddc23c7f3a45affe55a733e84dd8e30aef7d0f3d69d88f976af3dd6e959: Status 404 returned error can't find the container with id 9da02ddc23c7f3a45affe55a733e84dd8e30aef7d0f3d69d88f976af3dd6e959 Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.085244 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dz6f8" podStartSLOduration=78.085221966 podStartE2EDuration="1m18.085221966s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:07.08398503 +0000 UTC m=+99.558881118" watchObservedRunningTime="2026-02-03 12:07:07.085221966 +0000 UTC m=+99.560118054" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.100556 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podStartSLOduration=78.100532293 podStartE2EDuration="1m18.100532293s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:07.099997558 +0000 UTC m=+99.574893666" watchObservedRunningTime="2026-02-03 12:07:07.100532293 +0000 UTC m=+99.575428381" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.113270 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n4gcf" podStartSLOduration=78.113246137 podStartE2EDuration="1m18.113246137s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 
12:07:07.112866916 +0000 UTC m=+99.587763034" watchObservedRunningTime="2026-02-03 12:07:07.113246137 +0000 UTC m=+99.588142225" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.211743 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.211847 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:07 crc kubenswrapper[4679]: E0203 12:07:07.211976 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.212064 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:07 crc kubenswrapper[4679]: E0203 12:07:07.212292 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:07 crc kubenswrapper[4679]: E0203 12:07:07.212597 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.233543 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:01:42.517268142 +0000 UTC Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.233649 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.243078 4679 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.774289 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" event={"ID":"77341620-b0a5-4663-bd09-2f74f3b6cc46","Type":"ContainerStarted","Data":"eca2dba3d0bf286f22ef00bb88cda04d23547e32b2d6bf46794e92d868fffc30"} Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.774375 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" event={"ID":"77341620-b0a5-4663-bd09-2f74f3b6cc46","Type":"ContainerStarted","Data":"9da02ddc23c7f3a45affe55a733e84dd8e30aef7d0f3d69d88f976af3dd6e959"} Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.790147 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.790120049 podStartE2EDuration="1m21.790120049s" podCreationTimestamp="2026-02-03 12:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:07.13085478 +0000 UTC m=+99.605750868" watchObservedRunningTime="2026-02-03 12:07:07.790120049 +0000 UTC m=+100.265016137" Feb 03 12:07:07 crc kubenswrapper[4679]: I0203 12:07:07.790418 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fhk2s" podStartSLOduration=78.790409918 podStartE2EDuration="1m18.790409918s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:07.789823211 +0000 UTC m=+100.264719299" watchObservedRunningTime="2026-02-03 12:07:07.790409918 +0000 UTC m=+100.265306006" Feb 03 12:07:08 crc kubenswrapper[4679]: I0203 12:07:08.109552 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:08 crc kubenswrapper[4679]: E0203 12:07:08.109781 4679 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:07:08 crc kubenswrapper[4679]: E0203 12:07:08.109873 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs podName:ba5e4da3-455d-4394-824c-2dfe080bc2c5 nodeName:}" failed. No retries permitted until 2026-02-03 12:08:12.109853976 +0000 UTC m=+164.584750064 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs") pod "network-metrics-daemon-j8bgc" (UID: "ba5e4da3-455d-4394-824c-2dfe080bc2c5") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 12:07:08 crc kubenswrapper[4679]: I0203 12:07:08.211115 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:08 crc kubenswrapper[4679]: E0203 12:07:08.212851 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:09 crc kubenswrapper[4679]: I0203 12:07:09.211702 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:09 crc kubenswrapper[4679]: I0203 12:07:09.211759 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:09 crc kubenswrapper[4679]: I0203 12:07:09.211731 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:09 crc kubenswrapper[4679]: E0203 12:07:09.212026 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:09 crc kubenswrapper[4679]: E0203 12:07:09.212118 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:09 crc kubenswrapper[4679]: E0203 12:07:09.212346 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:10 crc kubenswrapper[4679]: I0203 12:07:10.211393 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:10 crc kubenswrapper[4679]: E0203 12:07:10.211606 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:11 crc kubenswrapper[4679]: I0203 12:07:11.211183 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:11 crc kubenswrapper[4679]: I0203 12:07:11.211219 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:11 crc kubenswrapper[4679]: I0203 12:07:11.211183 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:11 crc kubenswrapper[4679]: E0203 12:07:11.211343 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:11 crc kubenswrapper[4679]: E0203 12:07:11.211482 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:11 crc kubenswrapper[4679]: E0203 12:07:11.211600 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:12 crc kubenswrapper[4679]: I0203 12:07:12.210796 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:12 crc kubenswrapper[4679]: E0203 12:07:12.211213 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:13 crc kubenswrapper[4679]: I0203 12:07:13.210916 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:13 crc kubenswrapper[4679]: I0203 12:07:13.210916 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:13 crc kubenswrapper[4679]: E0203 12:07:13.211097 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:13 crc kubenswrapper[4679]: I0203 12:07:13.210940 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:13 crc kubenswrapper[4679]: E0203 12:07:13.211273 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:13 crc kubenswrapper[4679]: E0203 12:07:13.211483 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:14 crc kubenswrapper[4679]: I0203 12:07:14.211354 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:14 crc kubenswrapper[4679]: E0203 12:07:14.211811 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:14 crc kubenswrapper[4679]: I0203 12:07:14.212092 4679 scope.go:117] "RemoveContainer" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640" Feb 03 12:07:14 crc kubenswrapper[4679]: E0203 12:07:14.212286 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b7ws5_openshift-ovn-kubernetes(b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" Feb 03 12:07:15 crc kubenswrapper[4679]: I0203 12:07:15.211654 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:15 crc kubenswrapper[4679]: I0203 12:07:15.211729 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:15 crc kubenswrapper[4679]: I0203 12:07:15.211789 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:15 crc kubenswrapper[4679]: E0203 12:07:15.211854 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:15 crc kubenswrapper[4679]: E0203 12:07:15.212216 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:15 crc kubenswrapper[4679]: E0203 12:07:15.212275 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:16 crc kubenswrapper[4679]: I0203 12:07:16.211678 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:16 crc kubenswrapper[4679]: E0203 12:07:16.212291 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:17 crc kubenswrapper[4679]: I0203 12:07:17.211071 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:17 crc kubenswrapper[4679]: I0203 12:07:17.211173 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:17 crc kubenswrapper[4679]: E0203 12:07:17.211225 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:17 crc kubenswrapper[4679]: I0203 12:07:17.211203 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:17 crc kubenswrapper[4679]: E0203 12:07:17.211452 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:17 crc kubenswrapper[4679]: E0203 12:07:17.211350 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:18 crc kubenswrapper[4679]: I0203 12:07:18.211544 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:18 crc kubenswrapper[4679]: E0203 12:07:18.212613 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:19 crc kubenswrapper[4679]: I0203 12:07:19.211471 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:19 crc kubenswrapper[4679]: I0203 12:07:19.211509 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:19 crc kubenswrapper[4679]: I0203 12:07:19.211561 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:19 crc kubenswrapper[4679]: E0203 12:07:19.211720 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:19 crc kubenswrapper[4679]: E0203 12:07:19.211792 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:19 crc kubenswrapper[4679]: E0203 12:07:19.211925 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:20 crc kubenswrapper[4679]: I0203 12:07:20.211478 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:20 crc kubenswrapper[4679]: E0203 12:07:20.211610 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:21 crc kubenswrapper[4679]: I0203 12:07:21.211275 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:21 crc kubenswrapper[4679]: I0203 12:07:21.211349 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:21 crc kubenswrapper[4679]: E0203 12:07:21.211486 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:21 crc kubenswrapper[4679]: I0203 12:07:21.211595 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:21 crc kubenswrapper[4679]: E0203 12:07:21.211802 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:21 crc kubenswrapper[4679]: E0203 12:07:21.211861 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:22 crc kubenswrapper[4679]: I0203 12:07:22.211852 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:22 crc kubenswrapper[4679]: E0203 12:07:22.212068 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:23 crc kubenswrapper[4679]: I0203 12:07:23.211559 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:23 crc kubenswrapper[4679]: I0203 12:07:23.211613 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:23 crc kubenswrapper[4679]: I0203 12:07:23.211565 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:23 crc kubenswrapper[4679]: E0203 12:07:23.211745 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:23 crc kubenswrapper[4679]: E0203 12:07:23.211899 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:23 crc kubenswrapper[4679]: E0203 12:07:23.212018 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.211344 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:24 crc kubenswrapper[4679]: E0203 12:07:24.211550 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.831727 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/1.log" Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.832584 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/0.log" Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.832670 4679 generic.go:334] "Generic (PLEG): container finished" podID="413e7c7d-7c01-4502-8d73-3c3df2e60956" containerID="f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c" exitCode=1 Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.832728 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerDied","Data":"f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c"} Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.832787 4679 scope.go:117] "RemoveContainer" containerID="2c5da52434731c43df4db413ef297ad5fcc27c81f0664186a54c5deba06ffab0" Feb 03 12:07:24 crc kubenswrapper[4679]: I0203 12:07:24.833430 4679 scope.go:117] "RemoveContainer" containerID="f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c" Feb 03 12:07:24 crc kubenswrapper[4679]: E0203 12:07:24.833661 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2zqm7_openshift-multus(413e7c7d-7c01-4502-8d73-3c3df2e60956)\"" pod="openshift-multus/multus-2zqm7" podUID="413e7c7d-7c01-4502-8d73-3c3df2e60956" Feb 03 12:07:25 crc kubenswrapper[4679]: I0203 12:07:25.210985 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:25 crc kubenswrapper[4679]: I0203 12:07:25.211027 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:25 crc kubenswrapper[4679]: I0203 12:07:25.211106 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:25 crc kubenswrapper[4679]: E0203 12:07:25.211184 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:25 crc kubenswrapper[4679]: E0203 12:07:25.211291 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:25 crc kubenswrapper[4679]: E0203 12:07:25.211510 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:25 crc kubenswrapper[4679]: I0203 12:07:25.838810 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/1.log" Feb 03 12:07:26 crc kubenswrapper[4679]: I0203 12:07:26.210876 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:26 crc kubenswrapper[4679]: E0203 12:07:26.211085 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:27 crc kubenswrapper[4679]: I0203 12:07:27.211712 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:27 crc kubenswrapper[4679]: I0203 12:07:27.211712 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:27 crc kubenswrapper[4679]: E0203 12:07:27.211913 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:27 crc kubenswrapper[4679]: I0203 12:07:27.212008 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:27 crc kubenswrapper[4679]: E0203 12:07:27.212168 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:27 crc kubenswrapper[4679]: E0203 12:07:27.212214 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:28 crc kubenswrapper[4679]: I0203 12:07:28.210948 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:28 crc kubenswrapper[4679]: E0203 12:07:28.212873 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:28 crc kubenswrapper[4679]: I0203 12:07:28.213396 4679 scope.go:117] "RemoveContainer" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640" Feb 03 12:07:28 crc kubenswrapper[4679]: E0203 12:07:28.224502 4679 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 03 12:07:28 crc kubenswrapper[4679]: E0203 12:07:28.329485 4679 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 12:07:28 crc kubenswrapper[4679]: I0203 12:07:28.858158 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/3.log" Feb 03 12:07:28 crc kubenswrapper[4679]: I0203 12:07:28.862942 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerStarted","Data":"94e6a70bd4da159c97ae0c870f9413fed3101f98bc4371806ba39bc586b88a66"} Feb 03 12:07:28 crc kubenswrapper[4679]: I0203 12:07:28.863753 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:07:29 crc kubenswrapper[4679]: I0203 12:07:29.160962 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podStartSLOduration=100.16093849 podStartE2EDuration="1m40.16093849s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:28.898322936 +0000 UTC m=+121.373219044" watchObservedRunningTime="2026-02-03 12:07:29.16093849 +0000 UTC m=+121.635834578" Feb 03 12:07:29 crc kubenswrapper[4679]: I0203 12:07:29.162174 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j8bgc"] Feb 03 12:07:29 crc kubenswrapper[4679]: I0203 12:07:29.162314 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:29 crc kubenswrapper[4679]: E0203 12:07:29.162442 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:29 crc kubenswrapper[4679]: I0203 12:07:29.211431 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:29 crc kubenswrapper[4679]: I0203 12:07:29.211481 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:29 crc kubenswrapper[4679]: I0203 12:07:29.211457 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:29 crc kubenswrapper[4679]: E0203 12:07:29.211615 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:29 crc kubenswrapper[4679]: E0203 12:07:29.211781 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:29 crc kubenswrapper[4679]: E0203 12:07:29.211903 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:31 crc kubenswrapper[4679]: I0203 12:07:31.211130 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:31 crc kubenswrapper[4679]: I0203 12:07:31.211172 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:31 crc kubenswrapper[4679]: I0203 12:07:31.211148 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:31 crc kubenswrapper[4679]: I0203 12:07:31.211152 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:31 crc kubenswrapper[4679]: E0203 12:07:31.211316 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:31 crc kubenswrapper[4679]: E0203 12:07:31.211442 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:31 crc kubenswrapper[4679]: E0203 12:07:31.211488 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:31 crc kubenswrapper[4679]: E0203 12:07:31.211532 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:33 crc kubenswrapper[4679]: I0203 12:07:33.211141 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:33 crc kubenswrapper[4679]: I0203 12:07:33.211200 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:33 crc kubenswrapper[4679]: I0203 12:07:33.211265 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:33 crc kubenswrapper[4679]: I0203 12:07:33.211240 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:33 crc kubenswrapper[4679]: E0203 12:07:33.211322 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:33 crc kubenswrapper[4679]: E0203 12:07:33.211500 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:33 crc kubenswrapper[4679]: E0203 12:07:33.211567 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:33 crc kubenswrapper[4679]: E0203 12:07:33.211760 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:33 crc kubenswrapper[4679]: E0203 12:07:33.331735 4679 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 12:07:35 crc kubenswrapper[4679]: I0203 12:07:35.211328 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:35 crc kubenswrapper[4679]: I0203 12:07:35.211336 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:35 crc kubenswrapper[4679]: E0203 12:07:35.211879 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:35 crc kubenswrapper[4679]: I0203 12:07:35.211418 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:35 crc kubenswrapper[4679]: I0203 12:07:35.211416 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:35 crc kubenswrapper[4679]: E0203 12:07:35.212002 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:35 crc kubenswrapper[4679]: E0203 12:07:35.212073 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:35 crc kubenswrapper[4679]: E0203 12:07:35.212247 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:36 crc kubenswrapper[4679]: I0203 12:07:36.211827 4679 scope.go:117] "RemoveContainer" containerID="f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c" Feb 03 12:07:36 crc kubenswrapper[4679]: I0203 12:07:36.893759 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/1.log" Feb 03 12:07:36 crc kubenswrapper[4679]: I0203 12:07:36.893841 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerStarted","Data":"f734d03952e6546980c7e8006be19bad9093b7855a66f5543811cbe8f0ff2a53"} Feb 03 12:07:37 crc kubenswrapper[4679]: I0203 12:07:37.211502 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:37 crc kubenswrapper[4679]: I0203 12:07:37.211543 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:37 crc kubenswrapper[4679]: I0203 12:07:37.211556 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:37 crc kubenswrapper[4679]: I0203 12:07:37.211600 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:37 crc kubenswrapper[4679]: E0203 12:07:37.211705 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 12:07:37 crc kubenswrapper[4679]: E0203 12:07:37.211891 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j8bgc" podUID="ba5e4da3-455d-4394-824c-2dfe080bc2c5" Feb 03 12:07:37 crc kubenswrapper[4679]: E0203 12:07:37.211979 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 12:07:37 crc kubenswrapper[4679]: E0203 12:07:37.212073 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.211427 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.211566 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.211588 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.211589 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.215124 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.215405 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.215503 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.216884 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.217219 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 12:07:39 crc kubenswrapper[4679]: I0203 12:07:39.217336 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.466452 4679 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.507938 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4cnwb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.508529 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.509923 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.510631 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.514255 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fpn7j"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.518577 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.524242 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.532432 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.532755 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.533379 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.533729 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.534021 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.534195 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.534430 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.534448 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.534635 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.534746 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.535779 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cshmm"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.536021 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.536419 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9lb6b"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.536810 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.537202 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.537437 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-crc98"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.537764 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.540741 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.542393 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.543246 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.548008 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.548709 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.548911 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.549241 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552004 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552068 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552241 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552457 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552479 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552659 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552798 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.552920 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.554085 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-klcrz"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 
12:07:47.554856 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.558440 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.559008 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hgzn9"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.559668 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.560189 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.560734 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hgzn9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.561020 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.561290 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.561382 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.567043 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.567392 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.569955 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.570180 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.570419 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.570631 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.570843 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.571029 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.572034 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.572740 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 12:07:47 crc 
kubenswrapper[4679]: I0203 12:07:47.573258 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.573512 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.573651 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.573713 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.573780 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.573962 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574045 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574069 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/75e7133e-70dc-4896-bac7-d159e39737c1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574119 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863f865e-918d-468a-ae6e-fcd314d7aa79-serving-cert\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574153 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8l69\" (UniqueName: \"kubernetes.io/projected/75e7133e-70dc-4896-bac7-d159e39737c1-kube-api-access-m8l69\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574182 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ddl\" (UniqueName: \"kubernetes.io/projected/9ff564b9-b11a-4642-a931-fdb8e1c63872-kube-api-access-42ddl\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574210 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c6e390a-1a72-4a18-91a6-436752c1eb9a-audit-dir\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574239 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4002d3d-e043-4b02-960a-56c42232eaff-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574263 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574280 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhn2\" (UniqueName: \"kubernetes.io/projected/94240535-d98a-4d60-8911-55e1b7cdc76c-kube-api-access-qrhn2\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574319 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-audit-policies\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574349 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttplg\" (UniqueName: \"kubernetes.io/projected/05d40b59-1a0e-4684-a745-a0c1fb40245b-kube-api-access-ttplg\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574446 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kgqb\" (UniqueName: \"kubernetes.io/projected/45463619-c5a3-479b-9253-d3745c0d20d3-kube-api-access-5kgqb\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574473 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-etcd-client\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574495 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6wr6\" (UniqueName: \"kubernetes.io/projected/f4002d3d-e043-4b02-960a-56c42232eaff-kube-api-access-s6wr6\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574524 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d40b59-1a0e-4684-a745-a0c1fb40245b-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574552 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qlbms"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574600 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574499 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.573512 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574610 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574652 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574685 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-config\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574728 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff564b9-b11a-4642-a931-fdb8e1c63872-trusted-ca\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574759 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-service-ca-bundle\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574790 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-etcd-serving-ca\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574840 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-encryption-config\") pod \"apiserver-7bbb656c7d-htfp4\" 
(UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574871 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-audit\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574897 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94240535-d98a-4d60-8911-55e1b7cdc76c-serving-cert\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574941 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4002d3d-e043-4b02-960a-56c42232eaff-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.574988 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575014 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff564b9-b11a-4642-a931-fdb8e1c63872-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575078 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575076 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d40b59-1a0e-4684-a745-a0c1fb40245b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575158 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghssg\" (UniqueName: \"kubernetes.io/projected/d3b48b2e-6257-4121-84b8-967ff424f8b0-kube-api-access-ghssg\") pod \"cluster-samples-operator-665b6dd947-pthkx\" (UID: \"d3b48b2e-6257-4121-84b8-967ff424f8b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575194 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-config\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575218 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4002d3d-e043-4b02-960a-56c42232eaff-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575257 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jn8b\" (UniqueName: \"kubernetes.io/projected/f5830f6c-b0bf-454f-8726-8093c1b8c337-kube-api-access-2jn8b\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575221 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575295 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f33a63-9f35-4b1a-aed2-067b1b909028-machine-approver-tls\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575325 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94240535-d98a-4d60-8911-55e1b7cdc76c-config\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575484 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575547 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575621 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575742 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575965 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576030 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576041 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 
03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576138 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576206 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576351 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576532 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.576683 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.575352 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64rc\" (UniqueName: \"kubernetes.io/projected/915772ff-e239-46f4-931b-420de4ee4012-kube-api-access-v64rc\") pod \"downloads-7954f5f757-hgzn9\" (UID: \"915772ff-e239-46f4-931b-420de4ee4012\") " pod="openshift-console/downloads-7954f5f757-hgzn9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579405 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5830f6c-b0bf-454f-8726-8093c1b8c337-audit-dir\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579446 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3b48b2e-6257-4121-84b8-967ff424f8b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pthkx\" (UID: \"d3b48b2e-6257-4121-84b8-967ff424f8b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579469 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f33a63-9f35-4b1a-aed2-067b1b909028-config\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579494 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-image-import-ca\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579515 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-serving-cert\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579537 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-encryption-config\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579562 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94240535-d98a-4d60-8911-55e1b7cdc76c-trusted-ca\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579608 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45463619-c5a3-479b-9253-d3745c0d20d3-serving-cert\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579638 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff564b9-b11a-4642-a931-fdb8e1c63872-metrics-tls\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579661 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579689 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579712 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjv4\" (UniqueName: \"kubernetes.io/projected/0c6e390a-1a72-4a18-91a6-436752c1eb9a-kube-api-access-wxjv4\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579733 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2mz\" (UniqueName: \"kubernetes.io/projected/863f865e-918d-468a-ae6e-fcd314d7aa79-kube-api-access-8z2mz\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579759 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/75e7133e-70dc-4896-bac7-d159e39737c1-images\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579780 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-etcd-client\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579801 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579822 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.579845 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e7133e-70dc-4896-bac7-d159e39737c1-config\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580047 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5830f6c-b0bf-454f-8726-8093c1b8c337-node-pullsecrets\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580072 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-serving-cert\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580107 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-client-ca\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580146 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-config\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580169 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/69f33a63-9f35-4b1a-aed2-067b1b909028-auth-proxy-config\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580208 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6z7\" (UniqueName: \"kubernetes.io/projected/69f33a63-9f35-4b1a-aed2-067b1b909028-kube-api-access-jp6z7\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.580525 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.614794 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.617015 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.619484 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.619538 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.619716 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.619775 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.620111 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.620652 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.621139 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.621407 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.621634 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.621776 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.621938 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.653685 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 12:07:47 crc 
kubenswrapper[4679]: I0203 12:07:47.653919 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.654092 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.654981 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.655601 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.655912 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.655971 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.656093 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.656111 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.655933 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.656202 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-86rk7"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.657020 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.657729 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.658467 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-z8sz6"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.659225 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.660498 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.666348 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.667127 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.667271 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6qh27"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.667675 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.667885 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.667976 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.667988 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.669479 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.670757 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.671899 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.673087 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.677632 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.679011 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.679693 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.680900 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681220 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-config\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681258 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4002d3d-e043-4b02-960a-56c42232eaff-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681302 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jn8b\" (UniqueName: \"kubernetes.io/projected/f5830f6c-b0bf-454f-8726-8093c1b8c337-kube-api-access-2jn8b\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681325 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f33a63-9f35-4b1a-aed2-067b1b909028-machine-approver-tls\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681344 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94240535-d98a-4d60-8911-55e1b7cdc76c-config\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681386 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5830f6c-b0bf-454f-8726-8093c1b8c337-audit-dir\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681407 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v64rc\" (UniqueName: \"kubernetes.io/projected/915772ff-e239-46f4-931b-420de4ee4012-kube-api-access-v64rc\") pod \"downloads-7954f5f757-hgzn9\" (UID: \"915772ff-e239-46f4-931b-420de4ee4012\") " pod="openshift-console/downloads-7954f5f757-hgzn9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681428 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: 
\"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681759 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3b48b2e-6257-4121-84b8-967ff424f8b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pthkx\" (UID: \"d3b48b2e-6257-4121-84b8-967ff424f8b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.682928 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f33a63-9f35-4b1a-aed2-067b1b909028-config\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.682970 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/053c55aa-a27c-4b37-9a5c-99925bd42082-ca-trust-extracted\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683020 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-image-import-ca\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683063 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45463619-c5a3-479b-9253-d3745c0d20d3-serving-cert\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681829 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683284 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-config\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683116 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94240535-d98a-4d60-8911-55e1b7cdc76c-config\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.682435 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683116 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-serving-cert\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: E0203 12:07:47.683275 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.183258815 +0000 UTC m=+140.658154903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683561 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-encryption-config\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681875 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683633 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94240535-d98a-4d60-8911-55e1b7cdc76c-trusted-ca\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683655 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683665 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-certificates\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683700 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff564b9-b11a-4642-a931-fdb8e1c63872-metrics-tls\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683722 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683745 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683764 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjv4\" (UniqueName: 
\"kubernetes.io/projected/0c6e390a-1a72-4a18-91a6-436752c1eb9a-kube-api-access-wxjv4\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683783 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2mz\" (UniqueName: \"kubernetes.io/projected/863f865e-918d-468a-ae6e-fcd314d7aa79-kube-api-access-8z2mz\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683831 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/75e7133e-70dc-4896-bac7-d159e39737c1-images\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683851 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-etcd-client\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683873 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683897 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-tls\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683931 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e7133e-70dc-4896-bac7-d159e39737c1-config\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683957 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5830f6c-b0bf-454f-8726-8093c1b8c337-node-pullsecrets\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.683974 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-serving-cert\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 
12:07:47.683999 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-client-ca\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684030 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-config\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684052 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f33a63-9f35-4b1a-aed2-067b1b909028-auth-proxy-config\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684082 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-bound-sa-token\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684118 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp6z7\" (UniqueName: \"kubernetes.io/projected/69f33a63-9f35-4b1a-aed2-067b1b909028-kube-api-access-jp6z7\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684163 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/75e7133e-70dc-4896-bac7-d159e39737c1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684189 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8l69\" (UniqueName: \"kubernetes.io/projected/75e7133e-70dc-4896-bac7-d159e39737c1-kube-api-access-m8l69\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684211 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ddl\" (UniqueName: \"kubernetes.io/projected/9ff564b9-b11a-4642-a931-fdb8e1c63872-kube-api-access-42ddl\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684236 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/863f865e-918d-468a-ae6e-fcd314d7aa79-serving-cert\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684265 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c6e390a-1a72-4a18-91a6-436752c1eb9a-audit-dir\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684293 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4002d3d-e043-4b02-960a-56c42232eaff-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684320 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-trusted-ca\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.684643 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.685003 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5830f6c-b0bf-454f-8726-8093c1b8c337-node-pullsecrets\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.685036 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.681794 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5830f6c-b0bf-454f-8726-8093c1b8c337-audit-dir\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.685692 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-image-import-ca\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.686995 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.687384 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.687775 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94240535-d98a-4d60-8911-55e1b7cdc76c-trusted-ca\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.691487 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.691712 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.691902 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.692102 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.692245 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.692471 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.692644 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.693199 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.695242 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e7133e-70dc-4896-bac7-d159e39737c1-config\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.698447 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.698681 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.698832 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.702320 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/75e7133e-70dc-4896-bac7-d159e39737c1-images\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.702989 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f33a63-9f35-4b1a-aed2-067b1b909028-config\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.703280 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-etcd-client\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.703569 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-serving-cert\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.705111 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4002d3d-e043-4b02-960a-56c42232eaff-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.705173 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c6e390a-1a72-4a18-91a6-436752c1eb9a-audit-dir\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.705877 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f33a63-9f35-4b1a-aed2-067b1b909028-auth-proxy-config\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.706722 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.708427 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-client-ca\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.747723 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/053c55aa-a27c-4b37-9a5c-99925bd42082-installation-pull-secrets\") pod \"image-registry-697d97f7c8-klcrz\" 
(UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.747852 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhn2\" (UniqueName: \"kubernetes.io/projected/94240535-d98a-4d60-8911-55e1b7cdc76c-kube-api-access-qrhn2\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.747900 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lt2\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-kube-api-access-22lt2\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.747931 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-audit-policies\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.747962 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d40b59-1a0e-4684-a745-a0c1fb40245b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.747986 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttplg\" (UniqueName: \"kubernetes.io/projected/05d40b59-1a0e-4684-a745-a0c1fb40245b-kube-api-access-ttplg\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748008 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kgqb\" (UniqueName: \"kubernetes.io/projected/45463619-c5a3-479b-9253-d3745c0d20d3-kube-api-access-5kgqb\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748032 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-etcd-client\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748059 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6wr6\" (UniqueName: \"kubernetes.io/projected/f4002d3d-e043-4b02-960a-56c42232eaff-kube-api-access-s6wr6\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748085 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748104 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-config\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748131 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff564b9-b11a-4642-a931-fdb8e1c63872-trusted-ca\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748171 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-service-ca-bundle\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748202 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-etcd-serving-ca\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748246 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748279 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-encryption-config\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748306 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-audit\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/94240535-d98a-4d60-8911-55e1b7cdc76c-serving-cert\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748374 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4002d3d-e043-4b02-960a-56c42232eaff-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748430 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d40b59-1a0e-4684-a745-a0c1fb40245b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748454 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghssg\" (UniqueName: \"kubernetes.io/projected/d3b48b2e-6257-4121-84b8-967ff424f8b0-kube-api-access-ghssg\") pod \"cluster-samples-operator-665b6dd947-pthkx\" (UID: \"d3b48b2e-6257-4121-84b8-967ff424f8b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748479 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff564b9-b11a-4642-a931-fdb8e1c63872-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748648 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-serving-cert\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.748724 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.749077 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-config\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.749094 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.749278 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9ff564b9-b11a-4642-a931-fdb8e1c63872-metrics-tls\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 
12:07:47.749641 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f33a63-9f35-4b1a-aed2-067b1b909028-machine-approver-tls\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.749791 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863f865e-918d-468a-ae6e-fcd314d7aa79-serving-cert\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.750001 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-encryption-config\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.750224 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3b48b2e-6257-4121-84b8-967ff424f8b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pthkx\" (UID: \"d3b48b2e-6257-4121-84b8-967ff424f8b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.750297 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/75e7133e-70dc-4896-bac7-d159e39737c1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.750671 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45463619-c5a3-479b-9253-d3745c0d20d3-serving-cert\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.750685 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.752134 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-audit\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.753685 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-service-ca-bundle\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.754013 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.754083 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff564b9-b11a-4642-a931-fdb8e1c63872-trusted-ca\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.754282 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-etcd-serving-ca\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.754390 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.754623 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.755222 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c6e390a-1a72-4a18-91a6-436752c1eb9a-audit-policies\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.755379 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05d40b59-1a0e-4684-a745-a0c1fb40245b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.755522 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45463619-c5a3-479b-9253-d3745c0d20d3-config\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.755609 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.758261 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0c6e390a-1a72-4a18-91a6-436752c1eb9a-encryption-config\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.759250 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"] Feb 03 12:07:47 crc 
kubenswrapper[4679]: I0203 12:07:47.759467 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.761085 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4cnwb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.761252 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.762304 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wlhws"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.766004 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05d40b59-1a0e-4684-a745-a0c1fb40245b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.766744 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5830f6c-b0bf-454f-8726-8093c1b8c337-etcd-client\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.766980 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4002d3d-e043-4b02-960a-56c42232eaff-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.768824 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94240535-d98a-4d60-8911-55e1b7cdc76c-serving-cert\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.776560 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wlhws" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.776785 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5830f6c-b0bf-454f-8726-8093c1b8c337-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.776991 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.777534 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.777626 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.777547 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.778574 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cshmm"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.778708 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.780948 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kx774"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.781718 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.784133 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.784574 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.788015 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.788966 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.790230 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.790994 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.791083 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.792578 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.794009 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.795554 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bpsjq"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.796615 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.797042 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fpn7j"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.798026 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqgbb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.798919 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.799052 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.800033 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.800097 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.800590 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.801692 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9lb6b"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.803204 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-crc98"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.804003 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wlhws"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.804975 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.805952 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.806691 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.807580 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hgzn9"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.808780 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.809809 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.810764 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.810984 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.812726 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w9prt"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.813332 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9prt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.813700 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.814713 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.815714 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6mlnk"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.817030 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-klcrz"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.817172 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.817819 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.818933 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.819944 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.822110 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.823093 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kx774"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.824168 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qlbms"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.825252 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.826316 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.827740 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7hz4q"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.828754 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7hz4q" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.829109 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4nt88"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.829936 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4nt88" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.830950 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.831283 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.833008 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6qh27"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.834048 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.835399 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6mlnk"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.836838 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.838167 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.839439 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.840858 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-86rk7"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.848257 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7hz4q"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.848947 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850049 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850321 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-serving-cert\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850388 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgtm\" (UniqueName: \"kubernetes.io/projected/48fde74a-b7fe-41bd-a125-471a4b5fe72b-kube-api-access-vtgtm\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850457 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lt2\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-kube-api-access-22lt2\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850572 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lfq\" (UniqueName: \"kubernetes.io/projected/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-kube-api-access-m7lfq\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850610 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48fde74a-b7fe-41bd-a125-471a4b5fe72b-proxy-tls\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850686 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f672a8-75fd-4d4e-ada3-ba954aafde63-config\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850726 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850762 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850801 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.850971 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: 
\"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851013 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-client-ca\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851061 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-oauth-serving-cert\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851150 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7f672a8-75fd-4d4e-ada3-ba954aafde63-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851186 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12ee21f9-5470-453b-b807-199ac9c87cd3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" Feb 03 12:07:47 crc kubenswrapper[4679]: E0203 12:07:47.851287 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.351245198 +0000 UTC m=+140.826141306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851551 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851778 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851835 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtqvf\" (UniqueName: \"kubernetes.io/projected/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-kube-api-access-vtqvf\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851876 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48fde74a-b7fe-41bd-a125-471a4b5fe72b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851942 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdnc8\" (UniqueName: \"kubernetes.io/projected/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-kube-api-access-tdnc8\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.851990 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-oauth-config\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.852541 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/053c55aa-a27c-4b37-9a5c-99925bd42082-ca-trust-extracted\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.852942 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 
12:07:47.852951 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.853039 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-config\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.853140 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-certificates\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.853428 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.853695 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/053c55aa-a27c-4b37-9a5c-99925bd42082-ca-trust-extracted\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.853994 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ee21f9-5470-453b-b807-199ac9c87cd3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854410 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d1a8cb-6d83-47ff-976b-f752f09a27bb-serving-cert\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854469 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854514 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854554 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854671 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-dir\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854905 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-tls\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854944 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-config\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.854957 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-certificates\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855036 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96g4\" (UniqueName: \"kubernetes.io/projected/073fba4d-77a8-4bb5-9bef-f3acd194e9ee-kube-api-access-g96g4\") pod \"migrator-59844c95c7-fsmcz\" (UID: \"073fba4d-77a8-4bb5-9bef-f3acd194e9ee\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855086 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969330b9-18aa-4a19-908c-f2acf32431cb-service-ca-bundle\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855130 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpjj\" (UniqueName: \"kubernetes.io/projected/969330b9-18aa-4a19-908c-f2acf32431cb-kube-api-access-nbpjj\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " 
pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855294 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5wbs\" (UniqueName: \"kubernetes.io/projected/52d1a8cb-6d83-47ff-976b-f752f09a27bb-kube-api-access-p5wbs\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855330 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855377 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855400 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-service-ca\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855425 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855506 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-bound-sa-token\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855581 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-ca\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855624 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855648 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-default-certificate\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855677 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-service-ca\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855740 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-serving-cert\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855823 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-stats-auth\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.855871 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvwh\" (UniqueName: \"kubernetes.io/projected/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-kube-api-access-8mvwh\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856020 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-policies\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856059 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8p5w\" (UniqueName: \"kubernetes.io/projected/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-kube-api-access-k8p5w\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856086 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: 
\"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856114 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-client\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856145 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-metrics-certs\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856209 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f672a8-75fd-4d4e-ada3-ba954aafde63-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856241 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ee21f9-5470-453b-b807-199ac9c87cd3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856271 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-config\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856315 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bpsjq"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856343 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-trusted-ca\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856397 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/053c55aa-a27c-4b37-9a5c-99925bd42082-installation-pull-secrets\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856429 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-trusted-ca-bundle\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.856511 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.858798 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-trusted-ca\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.859522 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-tls\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.859607 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.859828 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/053c55aa-a27c-4b37-9a5c-99925bd42082-installation-pull-secrets\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.863504 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4nt88"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.865975 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqgbb"] Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.871495 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.891508 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.910529 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.930874 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.951204 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958260 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958324 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-serving-cert\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958387 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-serving-cert\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958419 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgtm\" (UniqueName: \"kubernetes.io/projected/48fde74a-b7fe-41bd-a125-471a4b5fe72b-kube-api-access-vtgtm\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958464 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48fde74a-b7fe-41bd-a125-471a4b5fe72b-proxy-tls\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958623 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88clj\" (UniqueName: \"kubernetes.io/projected/c45a77f4-45d7-4a67-90b0-086075deecbe-kube-api-access-88clj\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958663 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-socket-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958689 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrht\" (UniqueName: \"kubernetes.io/projected/6103f0a8-e747-4283-b983-58871373e22d-kube-api-access-wxrht\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958745 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f672a8-75fd-4d4e-ada3-ba954aafde63-config\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" 
(UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958773 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958861 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.958913 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-mountpoint-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959028 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhkb\" (UniqueName: \"kubernetes.io/projected/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-kube-api-access-skhkb\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959059 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f978f1ad-1273-42b0-8527-10f691a14389-images\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959092 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-srv-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959114 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-config\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959155 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-cert\") pod \"ingress-canary-4nt88\" (UID: 
\"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959179 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959196 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959226 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hr2w\" (UniqueName: \"kubernetes.io/projected/a3359841-c69d-4d99-ad68-12c48aa5e044-kube-api-access-2hr2w\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959253 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-webhook-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959276 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959299 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-registration-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959342 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2khx4\" (UniqueName: \"kubernetes.io/projected/76e46c81-70dc-463c-b1f0-523885b31458-kube-api-access-2khx4\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959392 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: 
\"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959413 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7f672a8-75fd-4d4e-ada3-ba954aafde63-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959459 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtqvf\" (UniqueName: \"kubernetes.io/projected/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-kube-api-access-vtqvf\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959480 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48fde74a-b7fe-41bd-a125-471a4b5fe72b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959505 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdnc8\" (UniqueName: \"kubernetes.io/projected/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-kube-api-access-tdnc8\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959521 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-oauth-config\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959542 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959563 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959584 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-config\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 
12:07:47.959602 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-certs\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959624 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f978f1ad-1273-42b0-8527-10f691a14389-proxy-tls\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959643 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ft6c\" (UniqueName: \"kubernetes.io/projected/3f09fa03-038d-4042-8f82-ca433431f66a-kube-api-access-6ft6c\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959676 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ee21f9-5470-453b-b807-199ac9c87cd3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959707 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6103f0a8-e747-4283-b983-58871373e22d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959747 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959767 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959783 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959801 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d761741-e933-4c06-8d40-436e683d2433-config-volume\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959782 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f672a8-75fd-4d4e-ada3-ba954aafde63-config\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959821 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g96g4\" (UniqueName: \"kubernetes.io/projected/073fba4d-77a8-4bb5-9bef-f3acd194e9ee-kube-api-access-g96g4\") pod \"migrator-59844c95c7-fsmcz\" (UID: \"073fba4d-77a8-4bb5-9bef-f3acd194e9ee\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz"
Feb 03 12:07:47 crc kubenswrapper[4679]: E0203 12:07:47.959874 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.459856078 +0000 UTC m=+140.934752236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959908 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbpjj\" (UniqueName: \"kubernetes.io/projected/969330b9-18aa-4a19-908c-f2acf32431cb-kube-api-access-nbpjj\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959939 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-cabundle\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.959973 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5wbs\" (UniqueName: \"kubernetes.io/projected/52d1a8cb-6d83-47ff-976b-f752f09a27bb-kube-api-access-p5wbs\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960002 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960037 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960119 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960153 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-ca\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960173 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960193 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-serving-cert\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960212 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960229 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnc5h\" (UniqueName: \"kubernetes.io/projected/ff80d152-43ab-4161-bf9c-e2e9b6a91892-kube-api-access-bnc5h\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960247 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-stats-auth\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960264 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvk4w\" (UniqueName: \"kubernetes.io/projected/67dd370b-7597-46e7-b520-8db5a21367b3-kube-api-access-gvk4w\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960289 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8p5w\" (UniqueName: \"kubernetes.io/projected/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-kube-api-access-k8p5w\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960315 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvwh\" (UniqueName: \"kubernetes.io/projected/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-kube-api-access-8mvwh\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960343 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5wgz\" (UniqueName: \"kubernetes.io/projected/8d25a252-92ee-4483-9266-cdee1f68a050-kube-api-access-d5wgz\") pod \"dns-operator-744455d44c-wlhws\" (UID: \"8d25a252-92ee-4483-9266-cdee1f68a050\") " pod="openshift-dns-operator/dns-operator-744455d44c-wlhws"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960392 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960412 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f672a8-75fd-4d4e-ada3-ba954aafde63-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960430 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-config\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960455 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-trusted-ca-bundle\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960481 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbjk\" (UniqueName: \"kubernetes.io/projected/7fe05031-509d-4fc9-ba17-a503aed871e3-kube-api-access-ggbjk\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960564 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllpb\" (UniqueName: \"kubernetes.io/projected/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-kube-api-access-qllpb\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960584 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7lfq\" (UniqueName: \"kubernetes.io/projected/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-kube-api-access-m7lfq\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960602 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d25a252-92ee-4483-9266-cdee1f68a050-metrics-tls\") pod \"dns-operator-744455d44c-wlhws\" (UID: \"8d25a252-92ee-4483-9266-cdee1f68a050\") " pod="openshift-dns-operator/dns-operator-744455d44c-wlhws"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960621 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960639 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-config\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960657 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960674 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960702 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff80d152-43ab-4161-bf9c-e2e9b6a91892-srv-cert\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960723 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-key\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960741 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fe05031-509d-4fc9-ba17-a503aed871e3-tmpfs\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960759 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntt4b\" (UniqueName: \"kubernetes.io/projected/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-kube-api-access-ntt4b\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960798 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960815 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-csi-data-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960834 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-client-ca\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960849 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f978f1ad-1273-42b0-8527-10f691a14389-auth-proxy-config\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960858 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48fde74a-b7fe-41bd-a125-471a4b5fe72b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960871 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-oauth-serving-cert\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960918 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfvh\" (UniqueName: \"kubernetes.io/projected/871a99a3-a5e1-4e7a-926d-5168fec4b91e-kube-api-access-5qfvh\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960961 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.960999 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12ee21f9-5470-453b-b807-199ac9c87cd3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961030 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a52ad0-fc8e-4927-aef2-829f9450ccb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961085 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff80d152-43ab-4161-bf9c-e2e9b6a91892-profile-collector-cert\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961129 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d1a8cb-6d83-47ff-976b-f752f09a27bb-serving-cert\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961160 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql44r\" (UniqueName: \"kubernetes.io/projected/f978f1ad-1273-42b0-8527-10f691a14389-kube-api-access-ql44r\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961199 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-dir\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961223 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-config\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961246 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969330b9-18aa-4a19-908c-f2acf32431cb-service-ca-bundle\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961303 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-node-bootstrap-token\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961327 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871a99a3-a5e1-4e7a-926d-5168fec4b91e-secret-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961378 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-service-ca\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961414 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-service-ca\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961443 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-default-certificate\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961467 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/85a52ad0-fc8e-4927-aef2-829f9450ccb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961493 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kgn\" (UniqueName: \"kubernetes.io/projected/0d761741-e933-4c06-8d40-436e683d2433-kube-api-access-w5kgn\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961519 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-policies\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961543 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7f4\" (UniqueName: \"kubernetes.io/projected/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-kube-api-access-4t7f4\") pod \"ingress-canary-4nt88\" (UID: \"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961565 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-plugins-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961597 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-client\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961621 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-metrics-certs\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961644 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d761741-e933-4c06-8d40-436e683d2433-metrics-tls\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961670 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ee21f9-5470-453b-b807-199ac9c87cd3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961678 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-oauth-serving-cert\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961695 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsqr\" (UniqueName: \"kubernetes.io/projected/85a52ad0-fc8e-4927-aef2-829f9450ccb3-kube-api-access-8qsqr\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961733 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.961759 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.962030 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.962889 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.963047 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.963968 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.963997 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.964338 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/969330b9-18aa-4a19-908c-f2acf32431cb-service-ca-bundle\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.964774 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-dir\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.965333 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-policies\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.965741 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-trusted-ca-bundle\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.965765 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-ca\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.965931 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-config\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.966695 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.966730 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-service-ca\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.968138 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-config\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.970783 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.970798 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.974857 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-stats-auth\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.974963 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.975791 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.976351 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.976419 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-metrics-certs\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.976707 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-client\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.977285 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f672a8-75fd-4d4e-ada3-ba954aafde63-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.977438 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.977876 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/969330b9-18aa-4a19-908c-f2acf32431cb-default-certificate\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.981441 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-serving-cert\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.982429 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.986620 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-oauth-config\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.988443 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-serving-cert\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.991571 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 03 12:07:47 crc kubenswrapper[4679]: I0203 12:07:47.993268 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.011896 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.031255 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.038278 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-etcd-service-ca\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.062813 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.062997 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.562954772 +0000 UTC m=+141.037850870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063064 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-node-bootstrap-token\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063091 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871a99a3-a5e1-4e7a-926d-5168fec4b91e-secret-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063127 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/85a52ad0-fc8e-4927-aef2-829f9450ccb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063146 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kgn\" (UniqueName: \"kubernetes.io/projected/0d761741-e933-4c06-8d40-436e683d2433-kube-api-access-w5kgn\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063168 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7f4\" (UniqueName: \"kubernetes.io/projected/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-kube-api-access-4t7f4\") pod \"ingress-canary-4nt88\" (UID: \"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063187 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-plugins-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063214 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d761741-e933-4c06-8d40-436e683d2433-metrics-tls\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063245 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qsqr\" (UniqueName: \"kubernetes.io/projected/85a52ad0-fc8e-4927-aef2-829f9450ccb3-kube-api-access-8qsqr\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063264 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063287 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063328 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-serving-cert\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063388 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88clj\" (UniqueName: \"kubernetes.io/projected/c45a77f4-45d7-4a67-90b0-086075deecbe-kube-api-access-88clj\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063426 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-socket-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063463 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrht\" (UniqueName: \"kubernetes.io/projected/6103f0a8-e747-4283-b983-58871373e22d-kube-api-access-wxrht\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063487 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-mountpoint-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063521 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhkb\" (UniqueName: \"kubernetes.io/projected/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-kube-api-access-skhkb\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063542 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-srv-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063562 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-config\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063579 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f978f1ad-1273-42b0-8527-10f691a14389-images\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063686 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-cert\") pod \"ingress-canary-4nt88\" (UID: \"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063751 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063779 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hr2w\" (UniqueName: \"kubernetes.io/projected/a3359841-c69d-4d99-ad68-12c48aa5e044-kube-api-access-2hr2w\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063806 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-webhook-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063809 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-plugins-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063824 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063851 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-registration-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063879 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2khx4\" (UniqueName: \"kubernetes.io/projected/76e46c81-70dc-463c-b1f0-523885b31458-kube-api-access-2khx4\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063917 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063955 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063978 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-certs\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064001 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f978f1ad-1273-42b0-8527-10f691a14389-proxy-tls\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064043 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ft6c\" (UniqueName: \"kubernetes.io/projected/3f09fa03-038d-4042-8f82-ca433431f66a-kube-api-access-6ft6c\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064034 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-socket-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064069 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6103f0a8-e747-4283-b983-58871373e22d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.063883 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-mountpoint-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064531 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-registration-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.064538 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.564525046 +0000 UTC m=+141.039421134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064715 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d761741-e933-4c06-8d40-436e683d2433-config-volume\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064813 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-cabundle\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064864 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/85a52ad0-fc8e-4927-aef2-829f9450ccb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064893 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.064961 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnc5h\" (UniqueName: \"kubernetes.io/projected/ff80d152-43ab-4161-bf9c-e2e9b6a91892-kube-api-access-bnc5h\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065007 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvk4w\" (UniqueName: \"kubernetes.io/projected/67dd370b-7597-46e7-b520-8db5a21367b3-kube-api-access-gvk4w\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065059 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5wgz\" (UniqueName: \"kubernetes.io/projected/8d25a252-92ee-4483-9266-cdee1f68a050-kube-api-access-d5wgz\") pod \"dns-operator-744455d44c-wlhws\" (UID: \"8d25a252-92ee-4483-9266-cdee1f68a050\") " pod="openshift-dns-operator/dns-operator-744455d44c-wlhws"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065129 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbjk\" (UniqueName: \"kubernetes.io/projected/7fe05031-509d-4fc9-ba17-a503aed871e3-kube-api-access-ggbjk\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065182 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qllpb\" (UniqueName: \"kubernetes.io/projected/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-kube-api-access-qllpb\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065214 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d25a252-92ee-4483-9266-cdee1f68a050-metrics-tls\") pod \"dns-operator-744455d44c-wlhws\" (UID: \"8d25a252-92ee-4483-9266-cdee1f68a050\") " pod="openshift-dns-operator/dns-operator-744455d44c-wlhws"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065241 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065282 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-config\") pod \"service-ca-operator-777779d784-95pqf\" (UID:
\"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065379 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff80d152-43ab-4161-bf9c-e2e9b6a91892-srv-cert\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065415 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-key\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065440 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fe05031-509d-4fc9-ba17-a503aed871e3-tmpfs\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065473 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntt4b\" (UniqueName: \"kubernetes.io/projected/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-kube-api-access-ntt4b\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065507 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065536 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-csi-data-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065571 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f978f1ad-1273-42b0-8527-10f691a14389-auth-proxy-config\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 
12:07:48.065624 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfvh\" (UniqueName: \"kubernetes.io/projected/871a99a3-a5e1-4e7a-926d-5168fec4b91e-kube-api-access-5qfvh\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065704 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a52ad0-fc8e-4927-aef2-829f9450ccb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065714 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c45a77f4-45d7-4a67-90b0-086075deecbe-csi-data-dir\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.065825 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff80d152-43ab-4161-bf9c-e2e9b6a91892-profile-collector-cert\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.066043 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql44r\" (UniqueName: \"kubernetes.io/projected/f978f1ad-1273-42b0-8527-10f691a14389-kube-api-access-ql44r\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.066764 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fe05031-509d-4fc9-ba17-a503aed871e3-tmpfs\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.067121 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f978f1ad-1273-42b0-8527-10f691a14389-auth-proxy-config\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.071057 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.082218 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.091273 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.111412 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.121301 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.131192 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.151842 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.166976 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.167153 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.667125017 +0000 UTC m=+141.142021115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.167268 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.168009 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.667988541 +0000 UTC m=+141.142884819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.172119 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.175877 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-client-ca\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.191189 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.211344 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.221439 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d1a8cb-6d83-47ff-976b-f752f09a27bb-serving-cert\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.231998 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.250898 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.268984 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.269312 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.769274454 +0000 UTC m=+141.244170542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.270321 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.270761 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.770752096 +0000 UTC m=+141.245648184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.271085 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.276254 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-config\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.307113 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4002d3d-e043-4b02-960a-56c42232eaff-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.325487 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jn8b\" (UniqueName: \"kubernetes.io/projected/f5830f6c-b0bf-454f-8726-8093c1b8c337-kube-api-access-2jn8b\") pod \"apiserver-76f77b778f-fpn7j\" (UID: \"f5830f6c-b0bf-454f-8726-8093c1b8c337\") " pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.347019 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64rc\" (UniqueName: \"kubernetes.io/projected/915772ff-e239-46f4-931b-420de4ee4012-kube-api-access-v64rc\") pod \"downloads-7954f5f757-hgzn9\" (UID: \"915772ff-e239-46f4-931b-420de4ee4012\") " pod="openshift-console/downloads-7954f5f757-hgzn9" Feb 03 
12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.351008 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.370692 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.372340 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.372514 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.872487852 +0000 UTC m=+141.347383940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.372891 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.373333 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.873322475 +0000 UTC m=+141.348218563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.377939 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12ee21f9-5470-453b-b807-199ac9c87cd3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.392000 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.412001 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.416085 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12ee21f9-5470-453b-b807-199ac9c87cd3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.431686 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.451462 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.471567 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.475250 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.475492 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.975446012 +0000 UTC m=+141.450342130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.476069 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.476573 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:48.976554304 +0000 UTC m=+141.451450392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.481451 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hgzn9" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.517134 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjv4\" (UniqueName: \"kubernetes.io/projected/0c6e390a-1a72-4a18-91a6-436752c1eb9a-kube-api-access-wxjv4\") pod \"apiserver-7bbb656c7d-htfp4\" (UID: \"0c6e390a-1a72-4a18-91a6-436752c1eb9a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.528089 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8l69\" (UniqueName: \"kubernetes.io/projected/75e7133e-70dc-4896-bac7-d159e39737c1-kube-api-access-m8l69\") pod \"machine-api-operator-5694c8668f-cshmm\" (UID: \"75e7133e-70dc-4896-bac7-d159e39737c1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.546811 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2mz\" (UniqueName: \"kubernetes.io/projected/863f865e-918d-468a-ae6e-fcd314d7aa79-kube-api-access-8z2mz\") pod \"controller-manager-879f6c89f-4cnwb\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.561215 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.567118 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ddl\" (UniqueName: \"kubernetes.io/projected/9ff564b9-b11a-4642-a931-fdb8e1c63872-kube-api-access-42ddl\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.578666 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.580487 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.080467151 +0000 UTC m=+141.555363239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.592477 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp6z7\" (UniqueName: \"kubernetes.io/projected/69f33a63-9f35-4b1a-aed2-067b1b909028-kube-api-access-jp6z7\") pod \"machine-approver-56656f9798-zcwc5\" (UID: \"69f33a63-9f35-4b1a-aed2-067b1b909028\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.615662 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ff564b9-b11a-4642-a931-fdb8e1c63872-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hj7nr\" (UID: \"9ff564b9-b11a-4642-a931-fdb8e1c63872\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.639134 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttplg\" (UniqueName: \"kubernetes.io/projected/05d40b59-1a0e-4684-a745-a0c1fb40245b-kube-api-access-ttplg\") pod \"openshift-apiserver-operator-796bbdcf4f-njnj9\" (UID: \"05d40b59-1a0e-4684-a745-a0c1fb40245b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.651703 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.655142 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6wr6\" (UniqueName: \"kubernetes.io/projected/f4002d3d-e043-4b02-960a-56c42232eaff-kube-api-access-s6wr6\") pod \"cluster-image-registry-operator-dc59b4c8b-77gv7\" (UID: \"f4002d3d-e043-4b02-960a-56c42232eaff\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.669199 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhn2\" (UniqueName: \"kubernetes.io/projected/94240535-d98a-4d60-8911-55e1b7cdc76c-kube-api-access-qrhn2\") pod \"console-operator-58897d9998-crc98\" (UID: \"94240535-d98a-4d60-8911-55e1b7cdc76c\") " pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.679572 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.681552 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.682243 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.182213078 +0000 UTC m=+141.657109366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.689862 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.698424 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.700386 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kgqb\" (UniqueName: \"kubernetes.io/projected/45463619-c5a3-479b-9253-d3745c0d20d3-kube-api-access-5kgqb\") pod \"authentication-operator-69f744f599-9lb6b\" (UID: \"45463619-c5a3-479b-9253-d3745c0d20d3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.705250 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.705580 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hgzn9"] Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.711951 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.712240 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghssg\" (UniqueName: \"kubernetes.io/projected/d3b48b2e-6257-4121-84b8-967ff424f8b0-kube-api-access-ghssg\") pod \"cluster-samples-operator-665b6dd947-pthkx\" (UID: \"d3b48b2e-6257-4121-84b8-967ff424f8b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.740609 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.752983 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48fde74a-b7fe-41bd-a125-471a4b5fe72b-proxy-tls\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.754173 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.764773 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f978f1ad-1273-42b0-8527-10f691a14389-images\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.773417 4679 request.go:700] Waited for 1.011405244s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.773723 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.774629 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.775428 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.777018 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.783527 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.783776 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.283746028 +0000 UTC m=+141.758642126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.784291 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.784759 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.284743826 +0000 UTC m=+141.759639904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.796159 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.807345 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fpn7j"] Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.809749 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f978f1ad-1273-42b0-8527-10f691a14389-proxy-tls\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.813897 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.825016 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.831736 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.851204 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.876641 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.880602 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.884975 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.886007 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.385973838 +0000 UTC m=+141.860869926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.894191 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.896435 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d25a252-92ee-4483-9266-cdee1f68a050-metrics-tls\") pod \"dns-operator-744455d44c-wlhws\" (UID: \"8d25a252-92ee-4483-9266-cdee1f68a050\") " pod="openshift-dns-operator/dns-operator-744455d44c-wlhws" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.915298 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.930319 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.939345 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cshmm"] Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.939848 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.951115 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hgzn9" event={"ID":"915772ff-e239-46f4-931b-420de4ee4012","Type":"ContainerStarted","Data":"d0e1a5b8fb03f6aaacfe9e5c5cc6b3ccec8115456e9d952c1a63953d27ff428a"} Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.954150 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.958195 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-config\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.962658 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" event={"ID":"69f33a63-9f35-4b1a-aed2-067b1b909028","Type":"ContainerStarted","Data":"e48bad0f2870b414e3d67d240ab68fe551cec8a1f308b636772d3f7d373a7d88"} Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.963902 4679 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" event={"ID":"f5830f6c-b0bf-454f-8726-8093c1b8c337","Type":"ContainerStarted","Data":"4b292a614588d49a0824e750afed7ed06b8b63bf84b53c8b3296338abdb56238"} Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.971246 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 12:07:48 crc kubenswrapper[4679]: W0203 12:07:48.978588 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75e7133e_70dc_4896_bac7_d159e39737c1.slice/crio-90e49670ae3ca3cc4d422f8080a218b1f4f80efbf0d1df2e9ea56e648e571ef5 WatchSource:0}: Error finding container 90e49670ae3ca3cc4d422f8080a218b1f4f80efbf0d1df2e9ea56e648e571ef5: Status 404 returned error can't find the container with id 90e49670ae3ca3cc4d422f8080a218b1f4f80efbf0d1df2e9ea56e648e571ef5 Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.988195 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:48 crc kubenswrapper[4679]: E0203 12:07:48.988636 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.48861363 +0000 UTC m=+141.963509918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:48 crc kubenswrapper[4679]: I0203 12:07:48.992653 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.015189 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.026174 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff80d152-43ab-4161-bf9c-e2e9b6a91892-srv-cert\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.033593 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.056297 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.063807 4679 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: 
failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064045 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d761741-e933-4c06-8d40-436e683d2433-metrics-tls podName:0d761741-e933-4c06-8d40-436e683d2433 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.563901381 +0000 UTC m=+142.038797469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/0d761741-e933-4c06-8d40-436e683d2433-metrics-tls") pod "dns-default-7hz4q" (UID: "0d761741-e933-4c06-8d40-436e683d2433") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064504 4679 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064650 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-srv-cert podName:76e46c81-70dc-463c-b1f0-523885b31458 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.564619661 +0000 UTC m=+142.039515749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-srv-cert") pod "olm-operator-6b444d44fb-4ksrp" (UID: "76e46c81-70dc-463c-b1f0-523885b31458") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064733 4679 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064771 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume podName:871a99a3-a5e1-4e7a-926d-5168fec4b91e nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.564759005 +0000 UTC m=+142.039655093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume") pod "collect-profiles-29502000-z4fmq" (UID: "871a99a3-a5e1-4e7a-926d-5168fec4b91e") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064799 4679 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.064834 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d761741-e933-4c06-8d40-436e683d2433-config-volume podName:0d761741-e933-4c06-8d40-436e683d2433 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.564827837 +0000 UTC m=+142.039723915 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0d761741-e933-4c06-8d40-436e683d2433-config-volume") pod "dns-default-7hz4q" (UID: "0d761741-e933-4c06-8d40-436e683d2433") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065207 4679 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065250 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-cert podName:f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.565242179 +0000 UTC m=+142.040138267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-cert") pod "ingress-canary-4nt88" (UID: "f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065296 4679 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065339 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-webhook-certs podName:5fb17ed3-7b60-4dd8-9d19-eb8781f88b86 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.565330191 +0000 UTC m=+142.040226279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-webhook-certs") pod "multus-admission-controller-857f4d67dd-kx774" (UID: "5fb17ed3-7b60-4dd8-9d19-eb8781f88b86") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065492 4679 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065567 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-control-plane-machine-set-operator-tls podName:00b9ca4d-dce2-4baa-b9ce-0eda632507e7 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.565551147 +0000 UTC m=+142.040447235 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-mlccb" (UID: "00b9ca4d-dce2-4baa-b9ce-0eda632507e7") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065593 4679 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065638 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6103f0a8-e747-4283-b983-58871373e22d-package-server-manager-serving-cert podName:6103f0a8-e747-4283-b983-58871373e22d nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.56562896 +0000 UTC m=+142.040525048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6103f0a8-e747-4283-b983-58871373e22d-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-92c68" (UID: "6103f0a8-e747-4283-b983-58871373e22d") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065713 4679 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065901 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca podName:3f09fa03-038d-4042-8f82-ca433431f66a nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.565756163 +0000 UTC m=+142.040652251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca") pod "marketplace-operator-79b997595-zqgbb" (UID: "3f09fa03-038d-4042-8f82-ca433431f66a") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.065948 4679 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066006 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85a52ad0-fc8e-4927-aef2-829f9450ccb3-serving-cert podName:85a52ad0-fc8e-4927-aef2-829f9450ccb3 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.56599028 +0000 UTC m=+142.040886368 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85a52ad0-fc8e-4927-aef2-829f9450ccb3-serving-cert") pod "openshift-config-operator-7777fb866f-l9c9d" (UID: "85a52ad0-fc8e-4927-aef2-829f9450ccb3") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066041 4679 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066078 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-webhook-cert podName:7fe05031-509d-4fc9-ba17-a503aed871e3 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566071112 +0000 UTC m=+142.040967200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-webhook-cert") pod "packageserver-d55dfcdfc-fgmqt" (UID: "7fe05031-509d-4fc9-ba17-a503aed871e3") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066111 4679 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066148 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-node-bootstrap-token podName:67dd370b-7597-46e7-b520-8db5a21367b3 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566140514 +0000 UTC m=+142.041036602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-node-bootstrap-token") pod "machine-config-server-w9prt" (UID: "67dd370b-7597-46e7-b520-8db5a21367b3") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066205 4679 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066242 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-serving-cert podName:b422f6d5-cfb0-468a-9078-f2d3bc877c3d nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566229907 +0000 UTC m=+142.041126005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-serving-cert") pod "service-ca-operator-777779d784-95pqf" (UID: "b422f6d5-cfb0-468a-9078-f2d3bc877c3d") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066285 4679 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066319 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics podName:3f09fa03-038d-4042-8f82-ca433431f66a nodeName:}" failed. 
No retries permitted until 2026-02-03 12:07:49.566308389 +0000 UTC m=+142.041204477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics") pod "marketplace-operator-79b997595-zqgbb" (UID: "3f09fa03-038d-4042-8f82-ca433431f66a") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066340 4679 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066394 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-certs podName:67dd370b-7597-46e7-b520-8db5a21367b3 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566384951 +0000 UTC m=+142.041281049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-certs") pod "machine-config-server-w9prt" (UID: "67dd370b-7597-46e7-b520-8db5a21367b3") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066549 4679 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066718 4679 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066640 4679 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066802 4679 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066821 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-cabundle podName:a3359841-c69d-4d99-ad68-12c48aa5e044 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566641668 +0000 UTC m=+142.041537766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-cabundle") pod "service-ca-9c57cc56f-bpsjq" (UID: "a3359841-c69d-4d99-ad68-12c48aa5e044") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066863 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-apiservice-cert podName:7fe05031-509d-4fc9-ba17-a503aed871e3 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566847634 +0000 UTC m=+142.041743762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-apiservice-cert") pod "packageserver-d55dfcdfc-fgmqt" (UID: "7fe05031-509d-4fc9-ba17-a503aed871e3") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066881 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-config podName:b422f6d5-cfb0-468a-9078-f2d3bc877c3d nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566873005 +0000 UTC m=+142.041769093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-config") pod "service-ca-operator-777779d784-95pqf" (UID: "b422f6d5-cfb0-468a-9078-f2d3bc877c3d") : failed to sync configmap cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.066981 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-key podName:a3359841-c69d-4d99-ad68-12c48aa5e044 nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.566930576 +0000 UTC m=+142.041826664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-key") pod "service-ca-9c57cc56f-bpsjq" (UID: "a3359841-c69d-4d99-ad68-12c48aa5e044") : failed to sync secret cache: timed out waiting for the condition Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.082680 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.084253 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.089959 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.090205 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.590154551 +0000 UTC m=+142.065050639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.091540 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.091818 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.092159 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.592131956 +0000 UTC m=+142.067028044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.102502 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff80d152-43ab-4161-bf9c-e2e9b6a91892-profile-collector-cert\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.103866 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871a99a3-a5e1-4e7a-926d-5168fec4b91e-secret-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.113802 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.134435 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.151789 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.171789 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 12:07:49 crc 
kubenswrapper[4679]: I0203 12:07:49.191831 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.192272 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.193014 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.692991208 +0000 UTC m=+142.167887296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.213061 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.213128 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-crc98"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.232960 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.247692 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.255409 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.272705 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.287900 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.291496 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.292534 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9lb6b"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.293944 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.294785 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.794771385 +0000 UTC m=+142.269667473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.308899 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.311249 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 12:07:49 crc kubenswrapper[4679]: W0203 12:07:49.323675 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff564b9_b11a_4642_a931_fdb8e1c63872.slice/crio-acc50c18f4573b725d24c7f9e474fb88acbe3c789f69c7540ee2bb0c108e0cdf WatchSource:0}: Error finding container acc50c18f4573b725d24c7f9e474fb88acbe3c789f69c7540ee2bb0c108e0cdf: Status 404 returned error can't find the container with id acc50c18f4573b725d24c7f9e474fb88acbe3c789f69c7540ee2bb0c108e0cdf Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.331830 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.352238 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.363971 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.372677 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.372898 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4cnwb"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.378473 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7"] Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.391237 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.395666 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.395860 4679 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.895831442 +0000 UTC m=+142.370727540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.399860 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.400459 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:49.900436282 +0000 UTC m=+142.375332370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.411472 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.430644 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.451954 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.482297 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.492076 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.501004 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.501749 4679 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.001709465 +0000 UTC m=+142.476605613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.512657 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.538983 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.552790 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.571968 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.591773 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.603950 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-srv-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604007 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-cert\") pod \"ingress-canary-4nt88\" (UID: \"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604033 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604063 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-webhook-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604106 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604157 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604179 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-certs\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604211 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6103f0a8-e747-4283-b983-58871373e22d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604237 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d761741-e933-4c06-8d40-436e683d2433-config-volume\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604281 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-cabundle\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604323 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604442 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604474 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-config\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604496 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-key\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604530 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604578 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a52ad0-fc8e-4927-aef2-829f9450ccb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604632 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-node-bootstrap-token\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604687 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d761741-e933-4c06-8d40-436e683d2433-metrics-tls\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604722 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.604764 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-serving-cert\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.604801 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.104783709 +0000 UTC m=+142.579679797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.605884 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.606750 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-cabundle\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.607096 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.607192 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-config\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.612139 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-webhook-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.613443 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.613483 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-serving-cert\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.613772 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.614324 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fe05031-509d-4fc9-ba17-a503aed871e3-apiservice-cert\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.614444 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3359841-c69d-4d99-ad68-12c48aa5e044-signing-key\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.614878 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.615541 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.616863 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76e46c81-70dc-463c-b1f0-523885b31458-srv-cert\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.625314 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a52ad0-fc8e-4927-aef2-829f9450ccb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.632410 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6103f0a8-e747-4283-b983-58871373e22d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.632798 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.652393 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.667482 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-node-bootstrap-token\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.672839 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.687153 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/67dd370b-7597-46e7-b520-8db5a21367b3-certs\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.691909 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.706552 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.706754 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.206719031 +0000 UTC m=+142.681615119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.707679 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.708210 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.208201263 +0000 UTC m=+142.683097351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.713747 4679 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.731094 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.751351 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.772051 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.775924 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d761741-e933-4c06-8d40-436e683d2433-config-volume\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.789331 4679 request.go:700] Waited for 1.960177393s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0 Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.791713 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.802281 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d761741-e933-4c06-8d40-436e683d2433-metrics-tls\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q" Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.809778 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.809977 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.309946479 +0000 UTC m=+142.784842567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.810195 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.810941 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.310931417 +0000 UTC m=+142.785827505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.811370 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.831305 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.853881 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.883190 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.893165 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-cert\") pod \"ingress-canary-4nt88\" (UID: \"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.911210 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.911486 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.411455339 +0000 UTC m=+142.886351437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.911623 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:49 crc kubenswrapper[4679]: E0203 12:07:49.911990 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.411971334 +0000 UTC m=+142.886867492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.915439 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lt2\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-kube-api-access-22lt2\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.933923 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-bound-sa-token\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.956470 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgtm\" (UniqueName: \"kubernetes.io/projected/48fde74a-b7fe-41bd-a125-471a4b5fe72b-kube-api-access-vtgtm\") pod \"machine-config-controller-84d6567774-hdk9v\" (UID: \"48fde74a-b7fe-41bd-a125-471a4b5fe72b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.969646 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7f672a8-75fd-4d4e-ada3-ba954aafde63-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5h4sg\" (UID: \"b7f672a8-75fd-4d4e-ada3-ba954aafde63\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.971173 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" event={"ID":"f4002d3d-e043-4b02-960a-56c42232eaff","Type":"ContainerStarted","Data":"13a91e67c217dadfb62a711429a3fcc5c165452b6dbc7477e3279a83ff20967e"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.971236 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" event={"ID":"f4002d3d-e043-4b02-960a-56c42232eaff","Type":"ContainerStarted","Data":"d3378e8c911ced2cecf9ed67977f1f3f8259e892ac4f03ce0f4facf45e08da78"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.974424 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" event={"ID":"863f865e-918d-468a-ae6e-fcd314d7aa79","Type":"ContainerStarted","Data":"042fc952951bc003a2739536461f8a3a3fdd7107d3709462c2f25a32b33e7cbf"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.974493 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" event={"ID":"863f865e-918d-468a-ae6e-fcd314d7aa79","Type":"ContainerStarted","Data":"f8da6e4075b1c288c62b8d18eda64e634104cd40a1f60a9f9fe110ae6d544095"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.975882 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.978285 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" event={"ID":"69f33a63-9f35-4b1a-aed2-067b1b909028","Type":"ContainerStarted","Data":"f5b76e2a74ba991c2b620b15e3a37ca1b416b7242d6709102d5dd535294ba58c"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.978321 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" event={"ID":"69f33a63-9f35-4b1a-aed2-067b1b909028","Type":"ContainerStarted","Data":"f30bfd1412119f5066f94264183451c34083c24c2a25acb1bd47762aab9003dc"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.979344 4679 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4cnwb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.979453 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.980403 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" event={"ID":"9ff564b9-b11a-4642-a931-fdb8e1c63872","Type":"ContainerStarted","Data":"dc3642ecf652985884da16893c4f5813c7ccc5a4bb08ec4ae95491f39b6f1b2c"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.980483 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" event={"ID":"9ff564b9-b11a-4642-a931-fdb8e1c63872","Type":"ContainerStarted","Data":"c107710b5d7a8967199609dcfabe83712984347d0f6731d2fc23b7dfc23687f8"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.980501 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" event={"ID":"9ff564b9-b11a-4642-a931-fdb8e1c63872","Type":"ContainerStarted","Data":"acc50c18f4573b725d24c7f9e474fb88acbe3c789f69c7540ee2bb0c108e0cdf"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.982871 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" event={"ID":"75e7133e-70dc-4896-bac7-d159e39737c1","Type":"ContainerStarted","Data":"3b41deeb74905fa54c6974713a4e4a8263a4e8cf47b582572fdb06d4d9b2e7e3"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.982959 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" event={"ID":"75e7133e-70dc-4896-bac7-d159e39737c1","Type":"ContainerStarted","Data":"561ca60020308065fdc3968154097748d72283dabd8873b2dcba6565dedd613d"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.982981 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" event={"ID":"75e7133e-70dc-4896-bac7-d159e39737c1","Type":"ContainerStarted","Data":"90e49670ae3ca3cc4d422f8080a218b1f4f80efbf0d1df2e9ea56e648e571ef5"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.986577 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hgzn9" event={"ID":"915772ff-e239-46f4-931b-420de4ee4012","Type":"ContainerStarted","Data":"6b99ff03469d162b4a6787aed969bf0f7c74d777429013a67097d98f1c1843a1"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.986827 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hgzn9"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.988666 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" event={"ID":"05d40b59-1a0e-4684-a745-a0c1fb40245b","Type":"ContainerStarted","Data":"4811b117c4ec0cc5b284d4c08bde3ae4db8a632199048e15659d06418544d859"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.988718 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" event={"ID":"05d40b59-1a0e-4684-a745-a0c1fb40245b","Type":"ContainerStarted","Data":"b3c33ff3a5b8f02389f089739ffb20ab16af8754b60e55bd53f89f16c178336b"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.990606 4679 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgzn9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.990652 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgzn9" podUID="915772ff-e239-46f4-931b-420de4ee4012" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.991494 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-crc98" event={"ID":"94240535-d98a-4d60-8911-55e1b7cdc76c","Type":"ContainerStarted","Data":"ca611a45ef1a264fbdf31e14ce1d62f74aab40478b2fe40fa863e2f1b2f966f9"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.991544 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-crc98" event={"ID":"94240535-d98a-4d60-8911-55e1b7cdc76c","Type":"ContainerStarted","Data":"70ff987f549989fdaa1f79cc47c996c1606a7f4bc0a18b442a318401c8d2c335"}
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.992461 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-crc98"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.994772 4679 patch_prober.go:28] interesting pod/console-operator-58897d9998-crc98 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.994812 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-crc98" podUID="94240535-d98a-4d60-8911-55e1b7cdc76c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.996449 4679 generic.go:334] "Generic (PLEG): container finished" podID="f5830f6c-b0bf-454f-8726-8093c1b8c337" containerID="87723df10764091d22372cb9e3672496034bf01b92223a0c9f6ef52679431316" exitCode=0
Feb 03 12:07:49 crc kubenswrapper[4679]: I0203 12:07:49.996529 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" event={"ID":"f5830f6c-b0bf-454f-8726-8093c1b8c337","Type":"ContainerDied","Data":"87723df10764091d22372cb9e3672496034bf01b92223a0c9f6ef52679431316"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.001079 4679 generic.go:334] "Generic (PLEG): container finished" podID="0c6e390a-1a72-4a18-91a6-436752c1eb9a" containerID="f3cdbf7bd9f5a4120e80d6e548633374759a24480549f98f66686ba64cf74daa" exitCode=0
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.001414 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" event={"ID":"0c6e390a-1a72-4a18-91a6-436752c1eb9a","Type":"ContainerDied","Data":"f3cdbf7bd9f5a4120e80d6e548633374759a24480549f98f66686ba64cf74daa"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.001526 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" event={"ID":"0c6e390a-1a72-4a18-91a6-436752c1eb9a","Type":"ContainerStarted","Data":"f496c551c89567e428c6008112b78ff01baa521ce5f655db16b447e8b5ef340e"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.005787 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtqvf\" (UniqueName: \"kubernetes.io/projected/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-kube-api-access-vtqvf\") pod \"console-f9d7485db-qlbms\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.007213 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdnc8\" (UniqueName: \"kubernetes.io/projected/2f4716d6-7273-4424-ac3d-4ae01fb1b6bb-kube-api-access-tdnc8\") pod \"etcd-operator-b45778765-6qh27\" (UID: \"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.007884 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" event={"ID":"45463619-c5a3-479b-9253-d3745c0d20d3","Type":"ContainerStarted","Data":"5deb45e3ea610f3b51aeb1ec10abc73ef9e5c0df9ef03676fc971e6cb487b0a9"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.007938 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" event={"ID":"45463619-c5a3-479b-9253-d3745c0d20d3","Type":"ContainerStarted","Data":"0337b10a107eb61322d596f6ba901bb93fa4ce8156e0d4d8c6b72b3a642355e5"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.011875 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" event={"ID":"d3b48b2e-6257-4121-84b8-967ff424f8b0","Type":"ContainerStarted","Data":"46820e3d9c87cfd71aa210090a3b6ddce20336af5ae0621cbb637c422a183564"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.011912 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" event={"ID":"d3b48b2e-6257-4121-84b8-967ff424f8b0","Type":"ContainerStarted","Data":"d6799248930f54c2933c1d42e6c7bae152dea91fbebf86448b669839c2acf08f"}
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.012854 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.015472 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.515419548 +0000 UTC m=+142.990315766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.021646 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.029736 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.030852 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7lfq\" (UniqueName: \"kubernetes.io/projected/7e7515a6-b4e4-4e5d-a6ca-d9bc90695472-kube-api-access-m7lfq\") pod \"kube-storage-version-migrator-operator-b67b599dd-rhdjt\" (UID: \"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.038038 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.044908 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8p5w\" (UniqueName: \"kubernetes.io/projected/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-kube-api-access-k8p5w\") pod \"oauth-openshift-558db77b4-86rk7\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.075923 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.081055 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvwh\" (UniqueName: \"kubernetes.io/projected/58b8dcd1-b808-4d61-bc7f-cad15b0a5a43-kube-api-access-8mvwh\") pod \"openshift-controller-manager-operator-756b6f6bc6-5h5jh\" (UID: \"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.093805 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g96g4\" (UniqueName: \"kubernetes.io/projected/073fba4d-77a8-4bb5-9bef-f3acd194e9ee-kube-api-access-g96g4\") pod \"migrator-59844c95c7-fsmcz\" (UID: \"073fba4d-77a8-4bb5-9bef-f3acd194e9ee\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.110398 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbpjj\" (UniqueName: \"kubernetes.io/projected/969330b9-18aa-4a19-908c-f2acf32431cb-kube-api-access-nbpjj\") pod \"router-default-5444994796-z8sz6\" (UID: \"969330b9-18aa-4a19-908c-f2acf32431cb\") " pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.115672 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.116143 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.616120965 +0000 UTC m=+143.091017053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.132299 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5wbs\" (UniqueName: \"kubernetes.io/projected/52d1a8cb-6d83-47ff-976b-f752f09a27bb-kube-api-access-p5wbs\") pod \"route-controller-manager-6576b87f9c-cb2wb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.149377 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12ee21f9-5470-453b-b807-199ac9c87cd3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zvpxd\" (UID: \"12ee21f9-5470-453b-b807-199ac9c87cd3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.216425 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7f4\" (UniqueName: \"kubernetes.io/projected/f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4-kube-api-access-4t7f4\") pod \"ingress-canary-4nt88\" (UID: \"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4\") " pod="openshift-ingress-canary/ingress-canary-4nt88"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.217521 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.218902 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.71887229 +0000 UTC m=+143.193768378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.219032 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.219736 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.719708273 +0000 UTC m=+143.194604541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.244599 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qsqr\" (UniqueName: \"kubernetes.io/projected/85a52ad0-fc8e-4927-aef2-829f9450ccb3-kube-api-access-8qsqr\") pod \"openshift-config-operator-7777fb866f-l9c9d\" (UID: \"85a52ad0-fc8e-4927-aef2-829f9450ccb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.248886 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4nt88"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.261945 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrht\" (UniqueName: \"kubernetes.io/projected/6103f0a8-e747-4283-b983-58871373e22d-kube-api-access-wxrht\") pod \"package-server-manager-789f6589d5-92c68\" (UID: \"6103f0a8-e747-4283-b983-58871373e22d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.276889 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kgn\" (UniqueName: \"kubernetes.io/projected/0d761741-e933-4c06-8d40-436e683d2433-kube-api-access-w5kgn\") pod \"dns-default-7hz4q\" (UID: \"0d761741-e933-4c06-8d40-436e683d2433\") " pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.289560 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.292016 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hr2w\" (UniqueName: \"kubernetes.io/projected/a3359841-c69d-4d99-ad68-12c48aa5e044-kube-api-access-2hr2w\") pod \"service-ca-9c57cc56f-bpsjq\" (UID: \"a3359841-c69d-4d99-ad68-12c48aa5e044\") " pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.298118 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.305087 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.312975 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.325002 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.325602 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.825575756 +0000 UTC m=+143.300471844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.331762 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhkb\" (UniqueName: \"kubernetes.io/projected/5fb17ed3-7b60-4dd8-9d19-eb8781f88b86-kube-api-access-skhkb\") pod \"multus-admission-controller-857f4d67dd-kx774\" (UID: \"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.335202 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88clj\" (UniqueName: \"kubernetes.io/projected/c45a77f4-45d7-4a67-90b0-086075deecbe-kube-api-access-88clj\") pod \"csi-hostpathplugin-6mlnk\" (UID: \"c45a77f4-45d7-4a67-90b0-086075deecbe\") " pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.352422 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ft6c\" (UniqueName: \"kubernetes.io/projected/3f09fa03-038d-4042-8f82-ca433431f66a-kube-api-access-6ft6c\") pod \"marketplace-operator-79b997595-zqgbb\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.352809 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.359710 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.363075 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt"]
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.368839 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.378943 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2khx4\" (UniqueName: \"kubernetes.io/projected/76e46c81-70dc-463c-b1f0-523885b31458-kube-api-access-2khx4\") pod \"olm-operator-6b444d44fb-4ksrp\" (UID: \"76e46c81-70dc-463c-b1f0-523885b31458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.391977 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c180a7f-7e0e-4af7-b08e-462bd4c3973c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sq68x\" (UID: \"0c180a7f-7e0e-4af7-b08e-462bd4c3973c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.403744 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.419145 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg"]
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.425088 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774"
Feb 03 12:07:50 crc kubenswrapper[4679]: W0203 12:07:50.427679 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e7515a6_b4e4_4e5d_a6ca_d9bc90695472.slice/crio-54d33d05dbe402de198fd03bdb400abb90aa2dd619e64a28202f8d9816e8f5ee WatchSource:0}: Error finding container 54d33d05dbe402de198fd03bdb400abb90aa2dd619e64a28202f8d9816e8f5ee: Status 404 returned error can't find the container with id 54d33d05dbe402de198fd03bdb400abb90aa2dd619e64a28202f8d9816e8f5ee
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.428492 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.429002 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:50.928983509 +0000 UTC m=+143.403879597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.433093 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.440715 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvk4w\" (UniqueName: \"kubernetes.io/projected/67dd370b-7597-46e7-b520-8db5a21367b3-kube-api-access-gvk4w\") pod \"machine-config-server-w9prt\" (UID: \"67dd370b-7597-46e7-b520-8db5a21367b3\") " pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.449185 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnc5h\" (UniqueName: \"kubernetes.io/projected/ff80d152-43ab-4161-bf9c-e2e9b6a91892-kube-api-access-bnc5h\") pod \"catalog-operator-68c6474976-fvjgg\" (UID: \"ff80d152-43ab-4161-bf9c-e2e9b6a91892\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.453265 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5wgz\" (UniqueName: \"kubernetes.io/projected/8d25a252-92ee-4483-9266-cdee1f68a050-kube-api-access-d5wgz\") pod \"dns-operator-744455d44c-wlhws\" (UID: \"8d25a252-92ee-4483-9266-cdee1f68a050\") " pod="openshift-dns-operator/dns-operator-744455d44c-wlhws"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.471284 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.480518 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.487399 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qllpb\" (UniqueName: \"kubernetes.io/projected/b422f6d5-cfb0-468a-9078-f2d3bc877c3d-kube-api-access-qllpb\") pod \"service-ca-operator-777779d784-95pqf\" (UID: \"b422f6d5-cfb0-468a-9078-f2d3bc877c3d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.487793 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.499055 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbjk\" (UniqueName: \"kubernetes.io/projected/7fe05031-509d-4fc9-ba17-a503aed871e3-kube-api-access-ggbjk\") pod \"packageserver-d55dfcdfc-fgmqt\" (UID: \"7fe05031-509d-4fc9-ba17-a503aed871e3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.505119 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.506176 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6qh27"]
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.510563 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9prt"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.518097 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntt4b\" (UniqueName: \"kubernetes.io/projected/00b9ca4d-dce2-4baa-b9ce-0eda632507e7-kube-api-access-ntt4b\") pod \"control-plane-machine-set-operator-78cbb6b69f-mlccb\" (UID: \"00b9ca4d-dce2-4baa-b9ce-0eda632507e7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"
Feb 03 12:07:50 crc kubenswrapper[4679]: W0203 12:07:50.526287 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7f672a8_75fd_4d4e_ada3_ba954aafde63.slice/crio-e8bf7890859fda1c8069bb00a5042177ae6e8cba302ded3334e102de1aa02fe1 WatchSource:0}: Error finding container e8bf7890859fda1c8069bb00a5042177ae6e8cba302ded3334e102de1aa02fe1: Status 404 returned error can't find the container with id e8bf7890859fda1c8069bb00a5042177ae6e8cba302ded3334e102de1aa02fe1
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.530486 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.531689 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.031663822 +0000 UTC m=+143.506559910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.531912 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.532110 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfvh\" (UniqueName: \"kubernetes.io/projected/871a99a3-a5e1-4e7a-926d-5168fec4b91e-kube-api-access-5qfvh\") pod \"collect-profiles-29502000-z4fmq\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.540420 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.555761 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v"]
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.559524 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql44r\" (UniqueName: \"kubernetes.io/projected/f978f1ad-1273-42b0-8527-10f691a14389-kube-api-access-ql44r\") pod \"machine-config-operator-74547568cd-89q8c\" (UID: \"f978f1ad-1273-42b0-8527-10f691a14389\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.636006 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.637776 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.137759901 +0000 UTC m=+143.612655989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: W0203 12:07:50.648230 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f4716d6_7273_4424_ac3d_4ae01fb1b6bb.slice/crio-2ec7170f7c9376d6336e4e9da59e430421903dc8ce7cfca27f7cbc8b627a95d2 WatchSource:0}: Error finding container 2ec7170f7c9376d6336e4e9da59e430421903dc8ce7cfca27f7cbc8b627a95d2: Status 404 returned error can't find the container with id 2ec7170f7c9376d6336e4e9da59e430421903dc8ce7cfca27f7cbc8b627a95d2
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.678433 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4nt88"]
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.682412 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.695308 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wlhws"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.715534 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.739403 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.742441 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.2424151 +0000 UTC m=+143.717311198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.752091 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.769629 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.769925 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.803652 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.868891 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.869281 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.369264784 +0000 UTC m=+143.844160862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:50 crc kubenswrapper[4679]: W0203 12:07:50.920031 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4d36346_06a7_4dd0_a62e_d6bbd2c1dca4.slice/crio-cbd2c2df22ca87e5f8f668dc339dd1df1714e272f4386bcd3d5bc31cea42afb1 WatchSource:0}: Error finding container cbd2c2df22ca87e5f8f668dc339dd1df1714e272f4386bcd3d5bc31cea42afb1: Status 404 returned error can't find the container with id cbd2c2df22ca87e5f8f668dc339dd1df1714e272f4386bcd3d5bc31cea42afb1
Feb 03 12:07:50 crc kubenswrapper[4679]: I0203 12:07:50.971975 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:50 crc kubenswrapper[4679]: E0203 12:07:50.977555 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.477507313 +0000 UTC m=+143.952403411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.022527 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" event={"ID":"b7f672a8-75fd-4d4e-ada3-ba954aafde63","Type":"ContainerStarted","Data":"e8bf7890859fda1c8069bb00a5042177ae6e8cba302ded3334e102de1aa02fe1"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.039113 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" event={"ID":"0c6e390a-1a72-4a18-91a6-436752c1eb9a","Type":"ContainerStarted","Data":"e6a195cd4dfe2464fd01362030dd9f9775918d0b4ae6f9d339ab9e2498347533"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.043529 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" event={"ID":"d3b48b2e-6257-4121-84b8-967ff424f8b0","Type":"ContainerStarted","Data":"cff41ac4c7b82c2c4adaa940cc5b3ad502a48c3e242ed11c35f314e4b91d7426"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.046266 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" event={"ID":"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb","Type":"ContainerStarted","Data":"2ec7170f7c9376d6336e4e9da59e430421903dc8ce7cfca27f7cbc8b627a95d2"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.047628 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" event={"ID":"48fde74a-b7fe-41bd-a125-471a4b5fe72b","Type":"ContainerStarted","Data":"756f7cb48c6dd1f3871f3a236c087505f991074ea910ef140a164aea7de3014d"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.049049 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z8sz6" event={"ID":"969330b9-18aa-4a19-908c-f2acf32431cb","Type":"ContainerStarted","Data":"6835b0d51bb0e395930b6a31b9263ec7b646e939a28a02ed5aa54e786638b309"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.050531 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" event={"ID":"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472","Type":"ContainerStarted","Data":"54d33d05dbe402de198fd03bdb400abb90aa2dd619e64a28202f8d9816e8f5ee"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.052997 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" event={"ID":"f5830f6c-b0bf-454f-8726-8093c1b8c337","Type":"ContainerStarted","Data":"a38f4c3736b527ba37f54001767b2bbb695a76e06539cf8f1d26c5978ed48b83"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.054951 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4nt88" event={"ID":"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4","Type":"ContainerStarted","Data":"cbd2c2df22ca87e5f8f668dc339dd1df1714e272f4386bcd3d5bc31cea42afb1"}
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.059266 4679 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4cnwb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.059324 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.059459 4679 patch_prober.go:28] interesting pod/console-operator-58897d9998-crc98 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.059585 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-crc98" podUID="94240535-d98a-4d60-8911-55e1b7cdc76c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.059742 4679 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgzn9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.059776 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgzn9" podUID="915772ff-e239-46f4-931b-420de4ee4012" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.148081 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.148708 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.648687126 +0000 UTC m=+144.123583204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.249722 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.251291 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.751253775 +0000 UTC m=+144.226149863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.468664 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.469024 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:51.96900719 +0000 UTC m=+144.443903278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.560529 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qlbms"]
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.572630 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.573039 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.0730137 +0000 UTC m=+144.547909788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.578277 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd"]
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.607587 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" podStartSLOduration=122.607559244 podStartE2EDuration="2m2.607559244s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:51.585652206 +0000 UTC m=+144.060548304" watchObservedRunningTime="2026-02-03 12:07:51.607559244 +0000 UTC m=+144.082455332"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.610098 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-86rk7"]
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.674061 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.674633 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.174611813 +0000 UTC m=+144.649507901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.721385 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" podStartSLOduration=122.721342859 podStartE2EDuration="2m2.721342859s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:51.719472776 +0000 UTC m=+144.194368864" watchObservedRunningTime="2026-02-03 12:07:51.721342859 +0000 UTC m=+144.196238947"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.775338 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.775776 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.275752391 +0000 UTC m=+144.750648479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.779335 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x"]
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.876627 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.877225 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.377206489 +0000 UTC m=+144.852102577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.928846 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pthkx" podStartSLOduration=122.928821393 podStartE2EDuration="2m2.928821393s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:51.927413854 +0000 UTC m=+144.402309952" watchObservedRunningTime="2026-02-03 12:07:51.928821393 +0000 UTC m=+144.403717481"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.959601 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hgzn9" podStartSLOduration=122.95957242 podStartE2EDuration="2m2.95957242s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:51.951338748 +0000 UTC m=+144.426234856" watchObservedRunningTime="2026-02-03 12:07:51.95957242 +0000 UTC m=+144.434468518"
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.965725 4679 csr.go:261] certificate signing request csr-ktllm is approved, waiting to be issued
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.970094 4679 csr.go:257] certificate signing request csr-ktllm is issued
Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.978334 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.979424 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.479389858 +0000 UTC m=+144.954285946 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:51 crc kubenswrapper[4679]: I0203 12:07:51.979790 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:51 crc kubenswrapper[4679]: E0203 12:07:51.980210 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.480190181 +0000 UTC m=+144.955086269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.053939 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-njnj9" podStartSLOduration=124.053913998 podStartE2EDuration="2m4.053913998s" podCreationTimestamp="2026-02-03 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.049305748 +0000 UTC m=+144.524201846" watchObservedRunningTime="2026-02-03 12:07:52.053913998 +0000 UTC m=+144.528810106" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.088531 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.092600 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.592549536 +0000 UTC m=+145.067445624 (durationBeforeRetry 500ms). 
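[annotation] The nestedpendingoperations lines are the other half of the loop: each failed mount or unmount records the earliest instant a retry is permitted and is skipped until then. A simplified sketch of that bookkeeping, assuming kubelet's usual exponential-backoff scheme (500ms base, doubling, capped; the cap value below is an assumption). All the entries in this excerpt are still at the 500ms base delay:

    package main

    import (
        "fmt"
        "time"
    )

    // expBackoff mirrors, in simplified form, the per-operation state behind
    // "No retries permitted until ... (durationBeforeRetry 500ms)".
    type expBackoff struct {
        lastErrorTime time.Time
        duration      time.Duration
    }

    func (b *expBackoff) update(now time.Time) {
        if b.duration == 0 {
            b.duration = 500 * time.Millisecond // base delay seen in the log
        } else {
            b.duration *= 2
            if b.duration > 2*time.Minute { // cap value is an assumption
                b.duration = 2 * time.Minute
            }
        }
        b.lastErrorTime = now
    }

    func (b *expBackoff) retryAllowedAt() time.Time {
        return b.lastErrorTime.Add(b.duration)
    }

    func main() {
        var b expBackoff
        // Failure instant from the UnmountVolume error logged at E0203 12:07:52.092600.
        now := time.Date(2026, 2, 3, 12, 7, 52, 92600000, time.UTC)
        b.update(now)
        fmt.Println(b.retryAllowedAt().Format("15:04:05.000000"))
        // 12:07:52.592600 — approximately the logged
        // "No retries permitted until 2026-02-03 12:07:52.592549536".
    }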
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.094443 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-77gv7" podStartSLOduration=123.094350187 podStartE2EDuration="2m3.094350187s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.07955594 +0000 UTC m=+144.554452038" watchObservedRunningTime="2026-02-03 12:07:52.094350187 +0000 UTC m=+144.569246275" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.144080 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" event={"ID":"b7f672a8-75fd-4d4e-ada3-ba954aafde63","Type":"ContainerStarted","Data":"0f530a4280f14fc53db711eac9f2d1ed106574a7552e8b19ba46a2943f5ac2d9"} Feb 03 12:07:52 crc kubenswrapper[4679]: W0203 12:07:52.144084 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c180a7f_7e0e_4af7_b08e_462bd4c3973c.slice/crio-a0b286bcb976a84449f8ce8358e205abbae5b22bfdd3e80e816b147da6b111ba WatchSource:0}: Error finding container a0b286bcb976a84449f8ce8358e205abbae5b22bfdd3e80e816b147da6b111ba: Status 404 returned error can't find the container with id a0b286bcb976a84449f8ce8358e205abbae5b22bfdd3e80e816b147da6b111ba Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.161878 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9prt" event={"ID":"67dd370b-7597-46e7-b520-8db5a21367b3","Type":"ContainerStarted","Data":"150d5ad01653fd1c7dd98d2ead26e0d2a90552cce2e29829764882e8bdd68fa9"} Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.169561 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" event={"ID":"7e7515a6-b4e4-4e5d-a6ca-d9bc90695472","Type":"ContainerStarted","Data":"57af47af269a317542e21d9c5fbb14a16f9ac060606aa1b54891eec457104cdd"} Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.171489 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qlbms" event={"ID":"42fe6faa-e19f-4b6d-acb9-df0ff4c35398","Type":"ContainerStarted","Data":"9b9348beb4767a0db3b7e3fb9831eae77faec103df4d21ae9aadc65727ff4044"} Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.190155 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.191480 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: 
\"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.191906 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.691890465 +0000 UTC m=+145.166786553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.220858 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hj7nr" podStartSLOduration=123.2208286 podStartE2EDuration="2m3.2208286s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.208342898 +0000 UTC m=+144.683239016" watchObservedRunningTime="2026-02-03 12:07:52.2208286 +0000 UTC m=+144.695724688" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.293077 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.295502 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:52.795471193 +0000 UTC m=+145.270367281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.395244 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.395715 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 12:07:52.895700047 +0000 UTC m=+145.370596135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.428195 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz"] Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.472463 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"] Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.499850 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.500293 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.000270213 +0000 UTC m=+145.475166301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.561521 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zcwc5" podStartSLOduration=124.561485227 podStartE2EDuration="2m4.561485227s" podCreationTimestamp="2026-02-03 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.50371237 +0000 UTC m=+144.978608468" watchObservedRunningTime="2026-02-03 12:07:52.561485227 +0000 UTC m=+145.036381315" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.571049 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh"] Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.573883 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lb6b" podStartSLOduration=124.573853736 podStartE2EDuration="2m4.573853736s" podCreationTimestamp="2026-02-03 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.553615936 +0000 UTC m=+145.028512034" 
watchObservedRunningTime="2026-02-03 12:07:52.573853736 +0000 UTC m=+145.048749824" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.610587 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.611111 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.111096175 +0000 UTC m=+145.585992253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.625851 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cshmm" podStartSLOduration=123.6258264 podStartE2EDuration="2m3.6258264s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.592175962 +0000 UTC m=+145.067072060" watchObservedRunningTime="2026-02-03 12:07:52.6258264 +0000 UTC m=+145.100722508" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.711326 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.711685 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.211667499 +0000 UTC m=+145.686563587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: W0203 12:07:52.818563 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52d1a8cb_6d83_47ff_976b_f752f09a27bb.slice/crio-7523e70b9e5537ed9e6d54ffd3fde42265f7d46da6643f0f909e37e13e4c7fff WatchSource:0}: Error finding container 7523e70b9e5537ed9e6d54ffd3fde42265f7d46da6643f0f909e37e13e4c7fff: Status 404 returned error can't find the container with id 7523e70b9e5537ed9e6d54ffd3fde42265f7d46da6643f0f909e37e13e4c7fff Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.823433 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.823868 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.323854079 +0000 UTC m=+145.798750157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.894716 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-crc98" podStartSLOduration=123.894683875 podStartE2EDuration="2m3.894683875s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.893119071 +0000 UTC m=+145.368015159" watchObservedRunningTime="2026-02-03 12:07:52.894683875 +0000 UTC m=+145.369579963" Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.929990 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:52 crc kubenswrapper[4679]: E0203 12:07:52.930733 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 12:07:53.430698849 +0000 UTC m=+145.905594947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.984696 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 12:02:51 +0000 UTC, rotation deadline is 2026-12-26 00:50:19.328859105 +0000 UTC Feb 03 12:07:52 crc kubenswrapper[4679]: I0203 12:07:52.984770 4679 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7812h42m26.344091962s for next certificate rotation Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.002987 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bpsjq"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.008886 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kx774"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.009986 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rhdjt" podStartSLOduration=124.009964242 podStartE2EDuration="2m4.009964242s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:52.99249301 +0000 UTC m=+145.467389098" watchObservedRunningTime="2026-02-03 12:07:53.009964242 +0000 UTC m=+145.484860330" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.032249 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.032846 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.532828627 +0000 UTC m=+146.007724725 (durationBeforeRetry 500ms). 
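[annotation] The two certificate_manager lines above show kubelet's serving-certificate manager picking a jittered rotation deadline inside the certificate's validity window (roughly 70-90% of the way through its lifetime) and then sleeping until it. The quoted "Waiting 7812h42m26.344091962s" is simply deadline minus now:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        now, _ := time.Parse(time.RFC3339Nano, "2026-02-03T12:07:52.984767143Z")
        deadline, _ := time.Parse(time.RFC3339Nano, "2026-12-26T00:50:19.328859105Z")
        // Prints 7812h42m26.344091962s, the logged wait before the next rotation.
        fmt.Println(deadline.Sub(now))
    }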
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.035595 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5h4sg" podStartSLOduration=124.035574054 podStartE2EDuration="2m4.035574054s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.030871571 +0000 UTC m=+145.505767649" watchObservedRunningTime="2026-02-03 12:07:53.035574054 +0000 UTC m=+145.510470142" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.036754 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.102690 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-crc98" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.105080 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqgbb"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.114122 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.134057 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.134637 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.634609524 +0000 UTC m=+146.109505612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.192924 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.247548 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.248033 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.748019699 +0000 UTC m=+146.222915787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.250903 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7hz4q"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.276637 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6mlnk"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.278370 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4nt88" event={"ID":"f4d36346-06a7-4dd0-a62e-d6bbd2c1dca4","Type":"ContainerStarted","Data":"fb5cf4235b521e7cabdac6d6c8fa114dcf01445721b94ab2f8ec8f21680e78f0"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.326456 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-95pqf"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.337868 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wlhws"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.348875 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.349617 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.84957772 +0000 UTC m=+146.324473808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.350818 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" event={"ID":"0c180a7f-7e0e-4af7-b08e-462bd4c3973c","Type":"ContainerStarted","Data":"a0b286bcb976a84449f8ce8358e205abbae5b22bfdd3e80e816b147da6b111ba"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.362525 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4nt88" podStartSLOduration=6.362499484 podStartE2EDuration="6.362499484s" podCreationTimestamp="2026-02-03 12:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.361183497 +0000 UTC m=+145.836079595" watchObservedRunningTime="2026-02-03 12:07:53.362499484 +0000 UTC m=+145.837395572" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.379962 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" event={"ID":"48fde74a-b7fe-41bd-a125-471a4b5fe72b","Type":"ContainerStarted","Data":"75acbd6d8768018fe4259f38863b13b220023b3573a02b2d638e6ddb29a6b1b6"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.434933 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" event={"ID":"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86","Type":"ContainerStarted","Data":"0b589efc314e3d054cce6ddade4ca494f0ce056df2f9c1224880905832a5031a"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.448144 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.448570 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.451842 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" event={"ID":"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9","Type":"ContainerStarted","Data":"7373e4bdbaff5f6d4a7cc843d8f2dd91413b6a1e3898035f3f906255c8db4a20"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.452898 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.453351 4679 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:53.953333244 +0000 UTC m=+146.428229332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.466888 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.466940 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" event={"ID":"a3359841-c69d-4d99-ad68-12c48aa5e044","Type":"ContainerStarted","Data":"d3e77e7b1c3c6818f2762a3fbbc51cea774fa797b5a041345921397f3b3b80cf"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.482765 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.524886 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9prt" event={"ID":"67dd370b-7597-46e7-b520-8db5a21367b3","Type":"ContainerStarted","Data":"394d7255da83be2eb6e32b89ba6b54421150c1074c34a3834f277b7574b25464"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.536630 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt"] Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.552686 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qlbms" event={"ID":"42fe6faa-e19f-4b6d-acb9-df0ff4c35398","Type":"ContainerStarted","Data":"31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.553820 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.554282 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.054258667 +0000 UTC m=+146.529154765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.571093 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" event={"ID":"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43","Type":"ContainerStarted","Data":"5eb5fb3acba97146a4b360bdb1af8f469ede3d3bcd74069b274a74d730ed1b8f"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.594733 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" event={"ID":"52d1a8cb-6d83-47ff-976b-f752f09a27bb","Type":"ContainerStarted","Data":"7523e70b9e5537ed9e6d54ffd3fde42265f7d46da6643f0f909e37e13e4c7fff"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.598543 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" event={"ID":"f5830f6c-b0bf-454f-8726-8093c1b8c337","Type":"ContainerStarted","Data":"c68820288ba285271d4b9c2faa4e06a7e0a6b7d83c245b13c7c8e2a2f6f727da"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.614485 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" event={"ID":"2f4716d6-7273-4424-ac3d-4ae01fb1b6bb","Type":"ContainerStarted","Data":"0401d7558ef5e530506dd58048f90a86a3618491d6abf5feef5e06def70eeb73"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.626550 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w9prt" podStartSLOduration=6.626524543 podStartE2EDuration="6.626524543s" podCreationTimestamp="2026-02-03 12:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.624881207 +0000 UTC m=+146.099777295" watchObservedRunningTime="2026-02-03 12:07:53.626524543 +0000 UTC m=+146.101420631" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.655345 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z8sz6" event={"ID":"969330b9-18aa-4a19-908c-f2acf32431cb","Type":"ContainerStarted","Data":"36fe6182315121c4803d0a08b1d475e9e9873e10e50669bcb804b8fa10ca07d6"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.663601 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.664148 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.164127872 +0000 UTC m=+146.639023960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: W0203 12:07:53.667151 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fe05031_509d_4fc9_ba17_a503aed871e3.slice/crio-1b91e39973a938e7da2dbcb34af2ac2c81aa53aba36f7a32fe211c49c78aec4e WatchSource:0}: Error finding container 1b91e39973a938e7da2dbcb34af2ac2c81aa53aba36f7a32fe211c49c78aec4e: Status 404 returned error can't find the container with id 1b91e39973a938e7da2dbcb34af2ac2c81aa53aba36f7a32fe211c49c78aec4e Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.684327 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" event={"ID":"6103f0a8-e747-4283-b983-58871373e22d","Type":"ContainerStarted","Data":"ad1984bed116ac5f269f238e6fb871f8ef1a7788035ba7d9234d84575e41cbf6"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.714530 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6qh27" podStartSLOduration=124.714426759 podStartE2EDuration="2m4.714426759s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.669110773 +0000 UTC m=+146.144006861" watchObservedRunningTime="2026-02-03 12:07:53.714426759 +0000 UTC m=+146.189322847" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.715938 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.715989 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" event={"ID":"12ee21f9-5470-453b-b807-199ac9c87cd3","Type":"ContainerStarted","Data":"6e669b6a9ad5e60c56e3cfb7c79a6fb101bc6ef0bee481f2e0780566d3390765"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.716016 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.777918 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz" event={"ID":"073fba4d-77a8-4bb5-9bef-f3acd194e9ee","Type":"ContainerStarted","Data":"325f2a7e413bfd18b9f4a81e6997257a1239b8b909a16b145199bf5f5d26bcfd"} Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.777876 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qlbms" podStartSLOduration=124.777837946 podStartE2EDuration="2m4.777837946s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.777075784 +0000 UTC m=+146.251971882" watchObservedRunningTime="2026-02-03 
12:07:53.777837946 +0000 UTC m=+146.252734034" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.779086 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.781091 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.281057327 +0000 UTC m=+146.755953595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.798676 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.881185 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.883937 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.383918635 +0000 UTC m=+146.858814723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.909088 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" podStartSLOduration=125.909062073 podStartE2EDuration="2m5.909062073s" podCreationTimestamp="2026-02-03 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.852648804 +0000 UTC m=+146.327544902" watchObservedRunningTime="2026-02-03 12:07:53.909062073 +0000 UTC m=+146.383958161" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.982801 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" podStartSLOduration=124.98277719 podStartE2EDuration="2m4.98277719s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:53.911042169 +0000 UTC m=+146.385938257" watchObservedRunningTime="2026-02-03 12:07:53.98277719 +0000 UTC m=+146.457673278" Feb 03 12:07:53 crc kubenswrapper[4679]: I0203 12:07:53.985518 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:53 crc kubenswrapper[4679]: E0203 12:07:53.985833 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.485797375 +0000 UTC m=+146.960693463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.045703 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-z8sz6" podStartSLOduration=125.045675632 podStartE2EDuration="2m5.045675632s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:54.043942363 +0000 UTC m=+146.518838461" watchObservedRunningTime="2026-02-03 12:07:54.045675632 +0000 UTC m=+146.520571720" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.088455 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.088988 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.588969361 +0000 UTC m=+147.063865449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.192054 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.192240 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.69220756 +0000 UTC m=+147.167103648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.192381 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.192695 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.692687733 +0000 UTC m=+147.167583821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.294322 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.295082 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.795059338 +0000 UTC m=+147.269955426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.313995 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.319478 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.319552 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.397433 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.397966 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.897945826 +0000 UTC m=+147.372841914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.498544 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.498806 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.998766267 +0000 UTC m=+147.473662355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.499242 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.499747 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:54.999711863 +0000 UTC m=+147.474608301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.603043 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.603572 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.103546619 +0000 UTC m=+147.578442717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.704955 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.705482 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.20546574 +0000 UTC m=+147.680361828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.768248 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7hz4q" event={"ID":"0d761741-e933-4c06-8d40-436e683d2433","Type":"ContainerStarted","Data":"5b19ec42b6ebaf69fd9389a791fc4633e8459b30a0456cc13aea74ed9ea5895c"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.769126 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" event={"ID":"76e46c81-70dc-463c-b1f0-523885b31458","Type":"ContainerStarted","Data":"e4d316a00f0b0e3e3d9c0ce9a18c57bce54abf3cc45d80eca72e15a39e6f11a1"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.770168 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" event={"ID":"0c180a7f-7e0e-4af7-b08e-462bd4c3973c","Type":"ContainerStarted","Data":"22d3d067992fce7e237e7f28422f2cd5700f47ed7fdb70baa6024279ad665765"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.771976 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" event={"ID":"f978f1ad-1273-42b0-8527-10f691a14389","Type":"ContainerStarted","Data":"da9043254e75ef2ec548bfaca1ffaacbc17269989b71f7a09ce9caed9aa0ab6a"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.773128 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" event={"ID":"7fe05031-509d-4fc9-ba17-a503aed871e3","Type":"ContainerStarted","Data":"1b91e39973a938e7da2dbcb34af2ac2c81aa53aba36f7a32fe211c49c78aec4e"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.773953 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" event={"ID":"c45a77f4-45d7-4a67-90b0-086075deecbe","Type":"ContainerStarted","Data":"15c6ebfaa310dc3f40841c3d9f2bf1401d5e2070ee05ed5b37d201bc725aeebf"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.775121 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" event={"ID":"58b8dcd1-b808-4d61-bc7f-cad15b0a5a43","Type":"ContainerStarted","Data":"bff52d7b7e75dfeb0f573be70751e54f6171a194c256055caaacfca289858630"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.776731 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz" event={"ID":"073fba4d-77a8-4bb5-9bef-f3acd194e9ee","Type":"ContainerStarted","Data":"6ff2c60b842a7ad3cabf8295cf8da4f0dc5a1e56fbef68a670a6c49574c5e167"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.777464 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" event={"ID":"b422f6d5-cfb0-468a-9078-f2d3bc877c3d","Type":"ContainerStarted","Data":"888cbee6551d44658b041a2034b002d22758b87181d000fad189716b83e5d94a"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.780795 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" event={"ID":"a3359841-c69d-4d99-ad68-12c48aa5e044","Type":"ContainerStarted","Data":"d3afb6c99cebff78e4a2de031b50ae476279971f0ca8f6ce7111b1c6313c289e"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.784129 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" event={"ID":"00b9ca4d-dce2-4baa-b9ce-0eda632507e7","Type":"ContainerStarted","Data":"7cb7cd31ce0b00abd6e1e282be192fb1e6249c559f852ad2b7c95b2be0389d5f"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.785112 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wlhws" event={"ID":"8d25a252-92ee-4483-9266-cdee1f68a050","Type":"ContainerStarted","Data":"0608d652608b3c6c1456ac1a6ddad7a7b979dc348541f8b6f611e226baedd1ea"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.786274 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zvpxd" event={"ID":"12ee21f9-5470-453b-b807-199ac9c87cd3","Type":"ContainerStarted","Data":"e80bd91f4c7729428819d9af8dda291314b41467204352567d54a150aecd9751"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.788011 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" event={"ID":"871a99a3-a5e1-4e7a-926d-5168fec4b91e","Type":"ContainerStarted","Data":"648e743dc95e86f6e66cbb91554685eb6559c03ac6a2dbc842ad7d85971e346b"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.789220 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" event={"ID":"85a52ad0-fc8e-4927-aef2-829f9450ccb3","Type":"ContainerStarted","Data":"ad684f596ee2365eb8ed462d5f659ae2f296409abde0b92a7fd5b38d7a2bb4d9"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.789257 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" 
event={"ID":"85a52ad0-fc8e-4927-aef2-829f9450ccb3","Type":"ContainerStarted","Data":"abe928d11fcc2fc293f10523eb8089436d5e5f6f04bf08769abadbaadd8b104a"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.789946 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" event={"ID":"3f09fa03-038d-4042-8f82-ca433431f66a","Type":"ContainerStarted","Data":"006a5e1a4219cfa65ca0f3e96476476f55ace39cafb396d8b02580609c7b6684"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.791064 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" event={"ID":"52d1a8cb-6d83-47ff-976b-f752f09a27bb","Type":"ContainerStarted","Data":"760ac184df635e6bd9b3fd8395bcf0fc3d3c85838754b23dc2bcab27f4dea37c"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.792218 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.793501 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sq68x" podStartSLOduration=125.79348649 podStartE2EDuration="2m5.79348649s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:54.79277797 +0000 UTC m=+147.267674058" watchObservedRunningTime="2026-02-03 12:07:54.79348649 +0000 UTC m=+147.268382578" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.794502 4679 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cb2wb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.794569 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.795055 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" event={"ID":"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9","Type":"ContainerStarted","Data":"425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.795842 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.796972 4679 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-86rk7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" start-of-body= Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.797024 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" 
podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.797679 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" event={"ID":"6103f0a8-e747-4283-b983-58871373e22d","Type":"ContainerStarted","Data":"c1e40686f97a1b21df6a5b76cd0dc5473bbff0b369712e387bc28f35f7504c68"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.799586 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" event={"ID":"ff80d152-43ab-4161-bf9c-e2e9b6a91892","Type":"ContainerStarted","Data":"5c69711f4971d2fda3ce2f69123e8d0f1fdb0c8b4ee267e23f95c95286b41be0"} Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.806191 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.806397 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.306344702 +0000 UTC m=+147.781240790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.806683 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.807080 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.307072423 +0000 UTC m=+147.781968511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.812736 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-htfp4" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.820251 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5h5jh" podStartSLOduration=125.820230293 podStartE2EDuration="2m5.820230293s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:54.818503105 +0000 UTC m=+147.293399193" watchObservedRunningTime="2026-02-03 12:07:54.820230293 +0000 UTC m=+147.295126381" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.882268 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" podStartSLOduration=125.882250761 podStartE2EDuration="2m5.882250761s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:54.85064912 +0000 UTC m=+147.325545208" watchObservedRunningTime="2026-02-03 12:07:54.882250761 +0000 UTC m=+147.357146849" Feb 03 12:07:54 crc kubenswrapper[4679]: I0203 12:07:54.907904 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:54 crc kubenswrapper[4679]: E0203 12:07:54.909258 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.409220721 +0000 UTC m=+147.884116809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.010962 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.011504 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.511486692 +0000 UTC m=+147.986382780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.112402 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.112751 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.612713904 +0000 UTC m=+148.087610012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.113285 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.113721 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.613705742 +0000 UTC m=+148.088601830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.214018 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.214586 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.714561093 +0000 UTC m=+148.189457181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.316039 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.316146 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.316220 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.316445 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.816430243 +0000 UTC m=+148.291326331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.417498 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.418069 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:55.918048025 +0000 UTC m=+148.392944113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.519750 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.520526 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.02046657 +0000 UTC m=+148.495362848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.621077 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.621674 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.121652511 +0000 UTC m=+148.596548599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.724063 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.724533 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.224515129 +0000 UTC m=+148.699411217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.805333 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" event={"ID":"7fe05031-509d-4fc9-ba17-a503aed871e3","Type":"ContainerStarted","Data":"38a397ae136750f948934de3f53c3018af80463a3598595bd4ab04c54f163812"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.806574 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.808048 4679 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgmqt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.808096 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" podUID="7fe05031-509d-4fc9-ba17-a503aed871e3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.809276 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" event={"ID":"ff80d152-43ab-4161-bf9c-e2e9b6a91892","Type":"ContainerStarted","Data":"5e6ee87e17c05bc09b8ae0b0ae58526305c41eb86e6402f5b198c48bdaabba1d"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.810714 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz" event={"ID":"073fba4d-77a8-4bb5-9bef-f3acd194e9ee","Type":"ContainerStarted","Data":"8dd7d4124a13072c35f783621daabde5971b3571e33f692bbd2efce7d1caaa52"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.812620 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" event={"ID":"b422f6d5-cfb0-468a-9078-f2d3bc877c3d","Type":"ContainerStarted","Data":"38114f03ce69bef522359d8ee1ac4d4d274bf6977944c4222f430cd9d921e06f"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.817763 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" event={"ID":"00b9ca4d-dce2-4baa-b9ce-0eda632507e7","Type":"ContainerStarted","Data":"1b218d2662db3c8cc76f75ad06636ab0ded0861caf9379e38bd42c1617bd5261"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.819711 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" event={"ID":"f978f1ad-1273-42b0-8527-10f691a14389","Type":"ContainerStarted","Data":"c90a62e990ffda1aefb044f8b37c863bbe16cffae551ceb34046f81f07b6fdc5"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.821302 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7hz4q" event={"ID":"0d761741-e933-4c06-8d40-436e683d2433","Type":"ContainerStarted","Data":"c84515feef069686d37605001fabbd4fbc092db72f63cf4bc0d2a317a17bb577"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.825318 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.825925 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.325905406 +0000 UTC m=+148.800801494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.829040 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" event={"ID":"3f09fa03-038d-4042-8f82-ca433431f66a","Type":"ContainerStarted","Data":"800b4f1ba325b667e9e7137bfdc1924b5d076bda977e334dc7610fd86424d2be"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.830210 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.831427 4679 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zqgbb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.831490 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.833874 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" podStartSLOduration=127.83386364 podStartE2EDuration="2m7.83386364s" podCreationTimestamp="2026-02-03 12:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:54.941883931 +0000 UTC m=+147.416780029" watchObservedRunningTime="2026-02-03 12:07:55.83386364 +0000 UTC m=+148.308759728" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.834412 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" podStartSLOduration=126.834406735 podStartE2EDuration="2m6.834406735s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:55.832620465 +0000 UTC m=+148.307516563" watchObservedRunningTime="2026-02-03 12:07:55.834406735 +0000 UTC m=+148.309302823" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.837785 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wlhws" event={"ID":"8d25a252-92ee-4483-9266-cdee1f68a050","Type":"ContainerStarted","Data":"d8de3ff86fa2c5213feecd1448b496d1ca5df058092afbde2e8c23b6a6cda400"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.841444 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" 
event={"ID":"76e46c81-70dc-463c-b1f0-523885b31458","Type":"ContainerStarted","Data":"22ca302e5abe2ab64ce7d2402368989ac2ff95c91ade724c7c90e3ff438c02d5"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.843969 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" event={"ID":"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86","Type":"ContainerStarted","Data":"faf9f7ceb2ff3440d90ce1ba00249365af69ceadba7a8bbec2edf4316da99068"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.846071 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" event={"ID":"48fde74a-b7fe-41bd-a125-471a4b5fe72b","Type":"ContainerStarted","Data":"51097f48a6b9fc3d74d5faecb915c63bc0c1d4c1276309ba631a7dc921041ad8"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.847604 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" event={"ID":"6103f0a8-e747-4283-b983-58871373e22d","Type":"ContainerStarted","Data":"4aa0e803d3a3350879d01ec5f8047546529ac331b98d838864ed247994880277"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.849121 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" event={"ID":"871a99a3-a5e1-4e7a-926d-5168fec4b91e","Type":"ContainerStarted","Data":"72131e58eaab6c1360b9526fc2747bf4e5da6eb4c95ec8e104ab86b716e21853"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.859600 4679 generic.go:334] "Generic (PLEG): container finished" podID="85a52ad0-fc8e-4927-aef2-829f9450ccb3" containerID="ad684f596ee2365eb8ed462d5f659ae2f296409abde0b92a7fd5b38d7a2bb4d9" exitCode=0 Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.861231 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" event={"ID":"85a52ad0-fc8e-4927-aef2-829f9450ccb3","Type":"ContainerDied","Data":"ad684f596ee2365eb8ed462d5f659ae2f296409abde0b92a7fd5b38d7a2bb4d9"} Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.862350 4679 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cb2wb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.862418 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.863405 4679 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-86rk7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" start-of-body= Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.863439 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.911510 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fsmcz" podStartSLOduration=126.911480537 podStartE2EDuration="2m6.911480537s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:55.877131889 +0000 UTC m=+148.352027977" watchObservedRunningTime="2026-02-03 12:07:55.911480537 +0000 UTC m=+148.386376625" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.927597 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:55 crc kubenswrapper[4679]: E0203 12:07:55.930270 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.430247325 +0000 UTC m=+148.905143403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.946639 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-95pqf" podStartSLOduration=126.946621297 podStartE2EDuration="2m6.946621297s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:55.945215467 +0000 UTC m=+148.420111555" watchObservedRunningTime="2026-02-03 12:07:55.946621297 +0000 UTC m=+148.421517385" Feb 03 12:07:55 crc kubenswrapper[4679]: I0203 12:07:55.946776 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" podStartSLOduration=126.946771021 podStartE2EDuration="2m6.946771021s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:55.913310498 +0000 UTC m=+148.388206586" watchObservedRunningTime="2026-02-03 12:07:55.946771021 +0000 UTC m=+148.421667109" Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.029196 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.029726 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.529700427 +0000 UTC m=+149.004596515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.040992 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bpsjq" podStartSLOduration=127.040962544 podStartE2EDuration="2m7.040962544s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:55.980428729 +0000 UTC m=+148.455324837" watchObservedRunningTime="2026-02-03 12:07:56.040962544 +0000 UTC m=+148.515858632" Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.082679 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" podStartSLOduration=127.082647759 podStartE2EDuration="2m7.082647759s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:56.040372608 +0000 UTC m=+148.515268696" watchObservedRunningTime="2026-02-03 12:07:56.082647759 +0000 UTC m=+148.557543857" Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.133130 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.133670 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.633654586 +0000 UTC m=+149.108550674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.234104 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.234504 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.734453806 +0000 UTC m=+149.209349894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.234784 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.234827 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.234854 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.234891 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.234931 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.238414 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.738383086 +0000 UTC m=+149.213279184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.243914 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.250050 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.250584 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.260257 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.332823 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.333998 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:56 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld
Feb 03 12:07:56 crc kubenswrapper[4679]: [+]process-running ok
Feb 03 12:07:56 crc kubenswrapper[4679]: healthz check failed
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.334095 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.335654 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.336137 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.83611158 +0000 UTC m=+149.311007668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.337599 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.345469 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.437668 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.438179 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:56.938157725 +0000 UTC m=+149.413053813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.543664 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.544154 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.043897514 +0000 UTC m=+149.518793602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.544230 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.545110 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.045102788 +0000 UTC m=+149.519998866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.645621 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.646539 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.146517655 +0000 UTC m=+149.621413743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.748723 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.749106 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.249091994 +0000 UTC m=+149.723988082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: W0203 12:07:56.805386 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-dfc8bbd8875c6a065c21cbdfb56157d4d518f5db1114b0714f442876641c89fb WatchSource:0}: Error finding container dfc8bbd8875c6a065c21cbdfb56157d4d518f5db1114b0714f442876641c89fb: Status 404 returned error can't find the container with id dfc8bbd8875c6a065c21cbdfb56157d4d518f5db1114b0714f442876641c89fb
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.850073 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.850623 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.350600174 +0000 UTC m=+149.825496262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.883460 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dfc8bbd8875c6a065c21cbdfb56157d4d518f5db1114b0714f442876641c89fb"}
Feb 03 12:07:56 crc kubenswrapper[4679]: W0203 12:07:56.886514 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-93d25bcdb053db9e095bb117eaa1cd79347d1da6551a1eef85aa400191d18c70 WatchSource:0}: Error finding container 93d25bcdb053db9e095bb117eaa1cd79347d1da6551a1eef85aa400191d18c70: Status 404 returned error can't find the container with id 93d25bcdb053db9e095bb117eaa1cd79347d1da6551a1eef85aa400191d18c70
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.917098 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" event={"ID":"f978f1ad-1273-42b0-8527-10f691a14389","Type":"ContainerStarted","Data":"be53651a0109e6850f546813baf9477ae979504bf09ea18ec09b342bee3d0c5b"}
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.928088 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" event={"ID":"85a52ad0-fc8e-4927-aef2-829f9450ccb3","Type":"ContainerStarted","Data":"113363a216ccd7f61f13231960c5e995ac6b156fc53d85db58295d1db21ad31b"}
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.930766 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.933464 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" event={"ID":"5fb17ed3-7b60-4dd8-9d19-eb8781f88b86","Type":"ContainerStarted","Data":"4eca762ced0cfb8af0a502a0ed3a66d4096874447af63b361ecaa4469dada0c6"}
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.935075 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.937532 4679 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zqgbb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.937594 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.938212 4679 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgmqt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.938244 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" podUID="7fe05031-509d-4fc9-ba17-a503aed871e3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.938319 4679 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cb2wb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.938338 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.946067 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-89q8c" podStartSLOduration=127.946039573 podStartE2EDuration="2m7.946039573s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:56.945424426 +0000 UTC m=+149.420320524" watchObservedRunningTime="2026-02-03 12:07:56.946039573 +0000 UTC m=+149.420935661"
Feb 03 12:07:56 crc kubenswrapper[4679]: I0203 12:07:56.952789 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:56 crc kubenswrapper[4679]: E0203 12:07:56.953156 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.453140853 +0000 UTC m=+149.928036941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.008020 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hdk9v" podStartSLOduration=128.007988088 podStartE2EDuration="2m8.007988088s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:57.003498642 +0000 UTC m=+149.478394730" watchObservedRunningTime="2026-02-03 12:07:57.007988088 +0000 UTC m=+149.482884176"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.008182 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" podStartSLOduration=128.008176284 podStartE2EDuration="2m8.008176284s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:56.979856146 +0000 UTC m=+149.454752234" watchObservedRunningTime="2026-02-03 12:07:57.008176284 +0000 UTC m=+149.483072372"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.033395 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" podStartSLOduration=128.033348753 podStartE2EDuration="2m8.033348753s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:57.030914874 +0000 UTC m=+149.505810962" watchObservedRunningTime="2026-02-03 12:07:57.033348753 +0000 UTC m=+149.508244861"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.055768 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.056278 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.556256128 +0000 UTC m=+150.031152216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.056721 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.058321 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.558310886 +0000 UTC m=+150.033206974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.116443 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68" podStartSLOduration=128.116416033 podStartE2EDuration="2m8.116416033s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:57.075443519 +0000 UTC m=+149.550339617" watchObservedRunningTime="2026-02-03 12:07:57.116416033 +0000 UTC m=+149.591312131"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.117405 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" podStartSLOduration=128.117395151 podStartE2EDuration="2m8.117395151s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:57.113540182 +0000 UTC m=+149.588436270" watchObservedRunningTime="2026-02-03 12:07:57.117395151 +0000 UTC m=+149.592291249"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.158593 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.158931 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.65891004 +0000 UTC m=+150.133806128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: W0203 12:07:57.160599 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-17b11ca92a4bfd2f681b8cc13aca50ad2d15b70ff314b121a5f16d06ceba1378 WatchSource:0}: Error finding container 17b11ca92a4bfd2f681b8cc13aca50ad2d15b70ff314b121a5f16d06ceba1378: Status 404 returned error can't find the container with id 17b11ca92a4bfd2f681b8cc13aca50ad2d15b70ff314b121a5f16d06ceba1378
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.194538 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mlccb" podStartSLOduration=128.19434428900001 podStartE2EDuration="2m8.194344289s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:57.19403089 +0000 UTC m=+149.668926978" watchObservedRunningTime="2026-02-03 12:07:57.194344289 +0000 UTC m=+149.669240377"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.195033 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kx774" podStartSLOduration=128.195026158 podStartE2EDuration="2m8.195026158s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:57.141920562 +0000 UTC m=+149.616816660" watchObservedRunningTime="2026-02-03 12:07:57.195026158 +0000 UTC m=+149.669922246"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.261270 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.261778 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.761762238 +0000 UTC m=+150.236658326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.329507 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:57 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld
Feb 03 12:07:57 crc kubenswrapper[4679]: [+]process-running ok
Feb 03 12:07:57 crc kubenswrapper[4679]: healthz check failed
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.329583 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.362181 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.362752 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.862725772 +0000 UTC m=+150.337621860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.464383 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.464942 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:57.964920031 +0000 UTC m=+150.439816119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.565617 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.565878 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.065835364 +0000 UTC m=+150.540731452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.566180 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.566642 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.066625917 +0000 UTC m=+150.541522005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.629270 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.668199 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.668800 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.168775165 +0000 UTC m=+150.643671253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.770577 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.772285 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.27226798 +0000 UTC m=+150.747164068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.872580 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.872872 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.372829073 +0000 UTC m=+150.847725171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.873279 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.873814 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.373803891 +0000 UTC m=+150.848699989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.943008 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" event={"ID":"c45a77f4-45d7-4a67-90b0-086075deecbe","Type":"ContainerStarted","Data":"45a4875af96e9be3cb93e1ea6d7724ce15cf4fea93ac31ee5549b99fbe15edc9"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.944816 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3fef33c7d31da7b44ba7f6a6e47ef9b550a0cb85fbbf441d321a9fe0bd823e19"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.944854 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"17b11ca92a4bfd2f681b8cc13aca50ad2d15b70ff314b121a5f16d06ceba1378"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.947376 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2a400ed5e825793cf43b210a0c605739c558fff7031b879027b8de0e6397e557"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.948671 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"82ec0171416ebe6a99cd68a6f99ff4ef83b472e47bfdcb8e348ddf907a83864e"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.948710 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"93d25bcdb053db9e095bb117eaa1cd79347d1da6551a1eef85aa400191d18c70"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.949031 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.953853 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7hz4q" event={"ID":"0d761741-e933-4c06-8d40-436e683d2433","Type":"ContainerStarted","Data":"fb861bc021f189c491dca391622ad9accf11df53d56a02b0a4b9b5b63cdefe36"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.954209 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7hz4q"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.956671 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wlhws" event={"ID":"8d25a252-92ee-4483-9266-cdee1f68a050","Type":"ContainerStarted","Data":"1963208a1ffddcdc501b2a78a635d46eaa205f7c8a446d6d55b8aebf19850ecc"}
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.957569 4679 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zqgbb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.957641 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.975031 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.975251 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.475213068 +0000 UTC m=+150.950109166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:57 crc kubenswrapper[4679]: I0203 12:07:57.975387 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:57 crc kubenswrapper[4679]: E0203 12:07:57.975810 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.475799434 +0000 UTC m=+150.950695522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.036442 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wlhws" podStartSLOduration=129.036416652 podStartE2EDuration="2m9.036416652s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:58.035258779 +0000 UTC m=+150.510154877" watchObservedRunningTime="2026-02-03 12:07:58.036416652 +0000 UTC m=+150.511312740"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.077776 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.078106 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.578053205 +0000 UTC m=+151.052949303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.078969 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.079454 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.579423184 +0000 UTC m=+151.054319272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.179952 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.180188 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.680150641 +0000 UTC m=+151.155046729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.180263 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.180896 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.680870232 +0000 UTC m=+151.155766500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.271322 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7hz4q" podStartSLOduration=11.271299999 podStartE2EDuration="11.271299999s" podCreationTimestamp="2026-02-03 12:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:07:58.094227141 +0000 UTC m=+150.569123239" watchObservedRunningTime="2026-02-03 12:07:58.271299999 +0000 UTC m=+150.746196087"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.281293 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.281576 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.781529298 +0000 UTC m=+151.256425386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.281667 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.282121 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.782102534 +0000 UTC m=+151.256998622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.316791 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:58 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld
Feb 03 12:07:58 crc kubenswrapper[4679]: [+]process-running ok
Feb 03 12:07:58 crc kubenswrapper[4679]: healthz check failed
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.316874 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.382872 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.383099 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.883064818 +0000 UTC m=+151.357960906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.383298 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.383750 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.883729157 +0000 UTC m=+151.358625245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.483166 4679 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgzn9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.483253 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgzn9" podUID="915772ff-e239-46f4-931b-420de4ee4012" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.483306 4679 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgzn9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.483477 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hgzn9" podUID="915772ff-e239-46f4-931b-420de4ee4012" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.484158 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.484324 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.98430537 +0000 UTC m=+151.459201448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.484540 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.484868 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:58.984860306 +0000 UTC m=+151.459756394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.562286 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.562346 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.571971 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j"
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.585686 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.585959 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.085918923 +0000 UTC m=+151.560815021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.586225 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.586690 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.086672935 +0000 UTC m=+151.561569023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.687525 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.687797 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.187752922 +0000 UTC m=+151.662649020 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.688013 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.688719 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.188708049 +0000 UTC m=+151.663604327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.789944 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.790507 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.290479797 +0000 UTC m=+151.765375885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.891826 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.892245 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.392225853 +0000 UTC m=+151.867121941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.957662 4679 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-fgmqt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.958089 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" podUID="7fe05031-509d-4fc9-ba17-a503aed871e3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.967983 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fpn7j" Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.993486 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.993746 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 12:07:59.493709321 +0000 UTC m=+151.968605409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:58 crc kubenswrapper[4679]: I0203 12:07:58.993921 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:58 crc kubenswrapper[4679]: E0203 12:07:58.994517 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.494507573 +0000 UTC m=+151.969403651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.095760 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.096030 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.595982272 +0000 UTC m=+152.070878360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.098260 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.099896 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.599876162 +0000 UTC m=+152.074772250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.205448 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.205837 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.705818077 +0000 UTC m=+152.180714165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.306809 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.307218 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.807202673 +0000 UTC m=+152.282098761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.325870 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:07:59 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld
Feb 03 12:07:59 crc kubenswrapper[4679]: [+]process-running ok
Feb 03 12:07:59 crc kubenswrapper[4679]: healthz check failed
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.325974 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.407688 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
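Two distinct probe failure modes interleave with the volume retries here: the router's startup probe reaches the endpoint but gets an HTTP 500 back (its healthz backends have not synced), while the downloads and config-operator endpoints refuse the TCP connection outright. The kubelet counts a response of 200-399 as success and anything else, including a transport error, as failure; a minimal sketch of that check (endpoint address taken from the log, timeout arbitrary):

```go
// Minimal HTTP GET probe in the spirit of the kubelet's prober: 2xx/3xx
// is success; other status codes or transport errors are failures.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) (string, error) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 10.217.0.13:8080: connect: connection refused"
		return "failure", err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", nil
	}
	return "failure", fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	result, err := probe("http://10.217.0.13:8080/") // endpoint from the log
	fmt.Println(result, err)
}
```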
Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.408301 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:07:59.908275411 +0000 UTC m=+152.383171499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.493107 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.493873 4679 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l9c9d container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.493935 4679 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l9c9d container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.493958 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" podUID="85a52ad0-fc8e-4927-aef2-829f9450ccb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.493982 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" podUID="85a52ad0-fc8e-4927-aef2-829f9450ccb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.494346 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.496972 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.497417 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.509201 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.509665 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-02-03 12:08:00.009650587 +0000 UTC m=+152.484546675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.517578 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.610922 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.611222 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0141f302-70c3-4151-9a55-3e472dc19198-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.611323 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.11127698 +0000 UTC m=+152.586173068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.611524 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0141f302-70c3-4151-9a55-3e472dc19198-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.611654 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.612130 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 12:08:00.112108683 +0000 UTC m=+152.587004771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.712616 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.712821 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.212763929 +0000 UTC m=+152.687660017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.712857 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0141f302-70c3-4151-9a55-3e472dc19198-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.712915 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0141f302-70c3-4151-9a55-3e472dc19198-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.712948 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.712992 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0141f302-70c3-4151-9a55-3e472dc19198-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.713297 4679 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.213282454 +0000 UTC m=+152.688178542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.767148 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0141f302-70c3-4151-9a55-3e472dc19198-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.774057 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7wbv"] Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.775509 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.785799 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.806352 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7wbv"] Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.818249 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.827181 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.827773 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.327748428 +0000 UTC m=+152.802644516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.929466 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-catalog-content\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.929530 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzk4v\" (UniqueName: \"kubernetes.io/projected/9cb85479-bcf1-4106-9a2b-560b2f20571a-kube-api-access-qzk4v\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.929560 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.929607 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-utilities\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.930123 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.430101562 +0000 UTC m=+152.904997650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.971195 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fr7jk"] Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.972272 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fr7jk"
Feb 03 12:07:59 crc kubenswrapper[4679]: W0203 12:07:59.979526 4679 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Feb 03 12:07:59 crc kubenswrapper[4679]: E0203 12:07:59.979579 4679 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 03 12:07:59 crc kubenswrapper[4679]: I0203 12:07:59.984824 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" event={"ID":"c45a77f4-45d7-4a67-90b0-086075deecbe","Type":"ContainerStarted","Data":"0b53e9be4191937c54bc764591b560232e555433da6d178b5144dfa3ba4dd0ee"}
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.040762 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.041065 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-utilities\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.041160 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-catalog-content\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.041196 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzk4v\" (UniqueName: \"kubernetes.io/projected/9cb85479-bcf1-4106-9a2b-560b2f20571a-kube-api-access-qzk4v\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv"
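The forbidden-secret warnings above are the node authorizer at work: a kubelet may only read a secret once a pod bound to that node references it, and the certified-operators-fr7jk pod has only just been scheduled, so the graph check fails with "no relationship found between node 'crc' and this object" until the binding propagates and the reflector retries. A sketch of the equivalent read with client-go (namespace and secret name are from the log; the credentials are assumed to be the node's):

```go
// Sketch of the API call behind the reflector error above, using client-go.
// With node credentials, the node authorizer rejects this request until a
// pod scheduled to "crc" references the secret; the kubelet then retries
// and its cache populates. Error handling is abbreviated.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // kubelet-style in-cluster credentials
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	secret, err := client.CoreV1().Secrets("openshift-marketplace").Get(
		context.Background(), "certified-operators-dockercfg-4rs5g", metav1.GetOptions{})
	if err != nil {
		// Expected until the pod binding propagates:
		// secrets "..." is forbidden ... no relationship found between node 'crc' and this object
		fmt.Println("get failed:", err)
		return
	}
	fmt.Println("secret resolved:", secret.Name)
}
```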
Feb 03 12:08:00 crc kubenswrapper[4679]: E0203 12:08:00.041858 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.54183528 +0000 UTC m=+153.016731368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.042310 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-utilities\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.042564 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-catalog-content\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.044542 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fr7jk"]
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.077428 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzk4v\" (UniqueName: \"kubernetes.io/projected/9cb85479-bcf1-4106-9a2b-560b2f20571a-kube-api-access-qzk4v\") pod \"community-operators-h7wbv\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " pod="openshift-marketplace/community-operators-h7wbv"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.096771 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.143512 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97xbr\" (UniqueName: \"kubernetes.io/projected/6546cf97-de00-4569-9187-b3e4d69fe5d9-kube-api-access-97xbr\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.143907 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-utilities\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.143983 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.144020 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-catalog-content\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: E0203 12:08:00.148302 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.648279409 +0000 UTC m=+153.123175667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.176342 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tbtbf"] Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.189131 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.261682 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.262315 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-catalog-content\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.262451 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xbr\" (UniqueName: \"kubernetes.io/projected/6546cf97-de00-4569-9187-b3e4d69fe5d9-kube-api-access-97xbr\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.262523 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-utilities\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.263168 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-utilities\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:00 crc kubenswrapper[4679]: E0203 12:08:00.263299 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.763273918 +0000 UTC m=+153.238170016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.263601 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-catalog-content\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.266090 4679 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.294697 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xbr\" (UniqueName: \"kubernetes.io/projected/6546cf97-de00-4569-9187-b3e4d69fe5d9-kube-api-access-97xbr\") pod \"certified-operators-fr7jk\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " pod="openshift-marketplace/certified-operators-fr7jk"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.299207 4679 patch_prober.go:28] interesting pod/console-f9d7485db-qlbms container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.299343 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qlbms" podUID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.310521 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.310577 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tbtbf"]
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.310608 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.317294 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-z8sz6"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.325326 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7"
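The plugin_watcher entry above is the turning point in this log: the hostpath provisioner has finally dropped its registration socket into /var/lib/kubelet/plugins_registry/, the directory the kubelet watches for new plugin sockets. A simplified sketch of that discovery loop using fsnotify; the real plugin watcher also performs a gRPC GetInfo handshake on each socket before registering the driver:

```go
// Simplified sketch of socket discovery in the style of the kubelet's
// plugin watcher, using fsnotify. Here we only log Create events; the
// real watcher validates each socket over gRPC before registration.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case event := <-watcher.Events:
			if event.Op&fsnotify.Create != 0 {
				// e.g. kubevirt.io.hostpath-provisioner-reg.sock
				log.Printf("Adding socket path to desired state cache: %s", event.Name)
			}
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}
```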
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.334792 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 12:08:00 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld
Feb 03 12:08:00 crc kubenswrapper[4679]: [+]process-running ok
Feb 03 12:08:00 crc kubenswrapper[4679]: healthz check failed
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.334886 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.369659 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-catalog-content\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.369728 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.369817 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmh4\" (UniqueName: \"kubernetes.io/projected/ef00d1e7-934d-4a44-8301-d9a778fe78d9-kube-api-access-dkmh4\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf"
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.369846 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-utilities\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf"
Feb 03 12:08:00 crc kubenswrapper[4679]: E0203 12:08:00.370301 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.870284633 +0000 UTC m=+153.345180721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-klcrz" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.374171 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pbtrc"]
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.380768 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.394099 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.397854 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pbtrc"] Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.433902 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.449303 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.456302 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4ksrp" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.482297 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.482629 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjs8\" (UniqueName: \"kubernetes.io/projected/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-kube-api-access-dfjs8\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.482669 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmh4\" (UniqueName: \"kubernetes.io/projected/ef00d1e7-934d-4a44-8301-d9a778fe78d9-kube-api-access-dkmh4\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.482705 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-utilities\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.482781 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-catalog-content\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.482899 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-utilities\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 
12:08:00.482989 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-catalog-content\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: E0203 12:08:00.484232 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 12:08:00.984198883 +0000 UTC m=+153.459094971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.485660 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-utilities\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.489489 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-catalog-content\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.489466 4679 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T12:08:00.266117459Z","Handler":null,"Name":""} Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.501195 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.517072 4679 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.517149 4679 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.529325 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmh4\" (UniqueName: \"kubernetes.io/projected/ef00d1e7-934d-4a44-8301-d9a778fe78d9-kube-api-access-dkmh4\") pod \"community-operators-tbtbf\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.585565 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjs8\" (UniqueName: 
\"kubernetes.io/projected/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-kube-api-access-dfjs8\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.585657 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-catalog-content\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.585704 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-utilities\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.585763 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.587454 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-catalog-content\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.587720 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-utilities\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.624647 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjs8\" (UniqueName: \"kubernetes.io/projected/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-kube-api-access-dfjs8\") pod \"certified-operators-pbtrc\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.640994 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.660903 4679 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.661263 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.717948 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.731154 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fvjgg" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.811741 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-klcrz\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.859324 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-fgmqt" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.870513 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7wbv"] Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.896295 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.908705 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 12:08:00 crc kubenswrapper[4679]: I0203 12:08:00.996405 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerStarted","Data":"6f252034c2a5a20151ce87c47811221c86369b960ee9523a7396f5b03705d86d"} Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:00.999951 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0141f302-70c3-4151-9a55-3e472dc19198","Type":"ContainerStarted","Data":"40d815570bbf945fc8b5ab226f2c10c5e541765008302e7693badfa482e4d2a1"} Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.009553 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" event={"ID":"c45a77f4-45d7-4a67-90b0-086075deecbe","Type":"ContainerStarted","Data":"3c5771721c1325f7d45dc406f7c2026cc67d6b3a2c67d5680c031dee31c57600"} Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.066990 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.091123 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tbtbf"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.297510 4679 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-fr7jk" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.298156 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.343013 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:01 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld Feb 03 12:08:01 crc kubenswrapper[4679]: [+]process-running ok Feb 03 12:08:01 crc kubenswrapper[4679]: healthz check failed Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.343119 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.469324 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.471036 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.565626 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-klcrz"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.702697 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.703607 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.709865 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.710168 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.729042 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.748643 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.748761 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.807987 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fr7jk"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.852287 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.852418 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.852514 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.883815 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pbtrc"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.902423 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:01 crc kubenswrapper[4679]: W0203 12:08:01.906114 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8994b452_6ee1_4dac_8bcc_90fc7fd8e8d7.slice/crio-9f147267e728399991498b2ff320699efe3e1399c78f846afa985dff0043aad9 WatchSource:0}: Error finding container 9f147267e728399991498b2ff320699efe3e1399c78f846afa985dff0043aad9: Status 404 returned error can't find the container with id 9f147267e728399991498b2ff320699efe3e1399c78f846afa985dff0043aad9 Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.968913 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4b5zq"] Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.970125 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.975106 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 12:08:01 crc kubenswrapper[4679]: I0203 12:08:01.986433 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b5zq"] Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.018867 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerStarted","Data":"bb78eb435dcf6f9be41ad74200a1c8325e003b0e1693d002742cab24c1fcd3e6"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.020888 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbtrc" event={"ID":"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7","Type":"ContainerStarted","Data":"9f147267e728399991498b2ff320699efe3e1399c78f846afa985dff0043aad9"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.022500 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" event={"ID":"053c55aa-a27c-4b37-9a5c-99925bd42082","Type":"ContainerStarted","Data":"98bee677d9a807a1a760e8682be83bd61dfee1ef98df00e6cab1f2523256762a"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.024177 4679 generic.go:334] "Generic (PLEG): container finished" podID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerID="a424b23d2959f5b112c5fb2f9bce223b83bacb7cab5413d13c11fbc4a84c876a" exitCode=0 Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.024234 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerDied","Data":"a424b23d2959f5b112c5fb2f9bce223b83bacb7cab5413d13c11fbc4a84c876a"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.024255 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerStarted","Data":"0149f5b7791b6c0a1b8fb656f10a56ee56132521c547f277a477747a6996749d"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.028169 4679 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.028385 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0141f302-70c3-4151-9a55-3e472dc19198","Type":"ContainerStarted","Data":"cd03c4d7d41ff46487b40305d98b6332da58b408be723ee816e2ebd4d20163e1"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.032453 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" event={"ID":"c45a77f4-45d7-4a67-90b0-086075deecbe","Type":"ContainerStarted","Data":"7d7259f4fca982fd69f5c5eb040a5b8a6f76a813069af5d9f1ed03cbab7e8dc6"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.035743 4679 generic.go:334] "Generic (PLEG): container finished" podID="871a99a3-a5e1-4e7a-926d-5168fec4b91e" containerID="72131e58eaab6c1360b9526fc2747bf4e5da6eb4c95ec8e104ab86b716e21853" exitCode=0 Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.036017 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" event={"ID":"871a99a3-a5e1-4e7a-926d-5168fec4b91e","Type":"ContainerDied","Data":"72131e58eaab6c1360b9526fc2747bf4e5da6eb4c95ec8e104ab86b716e21853"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.040809 4679 generic.go:334] "Generic (PLEG): container finished" podID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerID="c4185b96ed73b8fc2e07401f3c14ce88a262812ddf4808fd3ef2a8379393c34f" exitCode=0 Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.040859 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerDied","Data":"c4185b96ed73b8fc2e07401f3c14ce88a262812ddf4808fd3ef2a8379393c34f"} Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.059166 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlhl\" (UniqueName: \"kubernetes.io/projected/db9eac1b-370d-46dc-a81c-3f2e0befe712-kube-api-access-nmlhl\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.059477 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-utilities\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.059653 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-catalog-content\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.082880 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.082845451 podStartE2EDuration="3.082845451s" podCreationTimestamp="2026-02-03 12:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:02.080850215 +0000 UTC m=+154.555746313" watchObservedRunningTime="2026-02-03 12:08:02.082845451 +0000 UTC m=+154.557741539" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.137586 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6mlnk" podStartSLOduration=15.137560303 podStartE2EDuration="15.137560303s" podCreationTimestamp="2026-02-03 12:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:02.116476789 +0000 UTC m=+154.591372877" watchObservedRunningTime="2026-02-03 12:08:02.137560303 +0000 UTC m=+154.612456391" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.160931 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlhl\" (UniqueName: \"kubernetes.io/projected/db9eac1b-370d-46dc-a81c-3f2e0befe712-kube-api-access-nmlhl\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.161004 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-utilities\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.161047 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-catalog-content\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.161521 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-catalog-content\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.162879 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-utilities\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.187499 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlhl\" (UniqueName: \"kubernetes.io/projected/db9eac1b-370d-46dc-a81c-3f2e0befe712-kube-api-access-nmlhl\") pod \"redhat-marketplace-4b5zq\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.191988 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.220528 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.321610 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:02 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld Feb 03 12:08:02 crc kubenswrapper[4679]: [+]process-running ok Feb 03 12:08:02 crc kubenswrapper[4679]: healthz check failed Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.321956 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.323736 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.370598 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7sc9p"] Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.374839 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.383121 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sc9p"] Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.465622 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-catalog-content\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.465769 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-utilities\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.465797 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txb6\" (UniqueName: \"kubernetes.io/projected/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-kube-api-access-8txb6\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.489271 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.503037 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l9c9d" Feb 03 12:08:02 crc kubenswrapper[4679]: 
I0203 12:08:02.568811 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-catalog-content\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.568980 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-utilities\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.569027 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txb6\" (UniqueName: \"kubernetes.io/projected/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-kube-api-access-8txb6\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.572014 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-utilities\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.587671 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-catalog-content\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.642334 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txb6\" (UniqueName: \"kubernetes.io/projected/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-kube-api-access-8txb6\") pod \"redhat-marketplace-7sc9p\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.724812 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b5zq"] Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.767762 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.975785 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sthk"] Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.985972 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:02 crc kubenswrapper[4679]: I0203 12:08:02.989378 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.002613 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sthk"] Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.061196 4679 generic.go:334] "Generic (PLEG): container finished" podID="0141f302-70c3-4151-9a55-3e472dc19198" containerID="cd03c4d7d41ff46487b40305d98b6332da58b408be723ee816e2ebd4d20163e1" exitCode=0 Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.061309 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0141f302-70c3-4151-9a55-3e472dc19198","Type":"ContainerDied","Data":"cd03c4d7d41ff46487b40305d98b6332da58b408be723ee816e2ebd4d20163e1"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.062729 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af725d40-5d23-4b1b-bb34-d46a1b50aa09","Type":"ContainerStarted","Data":"da8078cfdcef3f8b8d1a95f5315e2392bc473071c8e80ebf194422fb476e0f59"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.065342 4679 generic.go:334] "Generic (PLEG): container finished" podID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerID="edd4569fdf5ab00276b9f1450150fbf17cf689d3b4c4e787f785fef8ea804690" exitCode=0 Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.065414 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerDied","Data":"edd4569fdf5ab00276b9f1450150fbf17cf689d3b4c4e787f785fef8ea804690"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.072024 4679 generic.go:334] "Generic (PLEG): container finished" podID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerID="c2e2b32b807de50d16836d915f103fc49e5e9d0e365385c7355f527c1942bd75" exitCode=0 Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.072153 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbtrc" event={"ID":"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7","Type":"ContainerDied","Data":"c2e2b32b807de50d16836d915f103fc49e5e9d0e365385c7355f527c1942bd75"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.081265 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs62\" (UniqueName: \"kubernetes.io/projected/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-kube-api-access-mbs62\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.081440 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-utilities\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.081475 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-catalog-content\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.097405 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" event={"ID":"053c55aa-a27c-4b37-9a5c-99925bd42082","Type":"ContainerStarted","Data":"60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.102175 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.129841 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerStarted","Data":"4875059d49a088457a815d9747481ec6eaaf6484e96250f0e7701504ee8b8362"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.129916 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerStarted","Data":"f1f3519ee3334e8190f66a56f2ac36e0705cb42a10c2695737ad9edb1bc5d0bb"} Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.186625 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbs62\" (UniqueName: \"kubernetes.io/projected/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-kube-api-access-mbs62\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.187169 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-utilities\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.187200 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-catalog-content\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.188215 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-catalog-content\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.188674 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-utilities\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.188905 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" podStartSLOduration=134.18887288 
podStartE2EDuration="2m14.18887288s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:03.186116513 +0000 UTC m=+155.661012611" watchObservedRunningTime="2026-02-03 12:08:03.18887288 +0000 UTC m=+155.663768968" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.245349 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbs62\" (UniqueName: \"kubernetes.io/projected/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-kube-api-access-mbs62\") pod \"redhat-operators-2sthk\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.332550 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.334604 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:03 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld Feb 03 12:08:03 crc kubenswrapper[4679]: [+]process-running ok Feb 03 12:08:03 crc kubenswrapper[4679]: healthz check failed Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.334684 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.387572 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g6ksm"] Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.392532 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.412779 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sc9p"] Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.430773 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g6ksm"] Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.499997 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfsn\" (UniqueName: \"kubernetes.io/projected/7a48df33-a76e-47c7-a418-d60f2b7f74de-kube-api-access-dwfsn\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.500060 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-catalog-content\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.500111 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-utilities\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.602028 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfsn\" (UniqueName: \"kubernetes.io/projected/7a48df33-a76e-47c7-a418-d60f2b7f74de-kube-api-access-dwfsn\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.603082 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-catalog-content\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.605616 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-catalog-content\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.605813 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-utilities\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.606416 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-utilities\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " 
pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.647452 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfsn\" (UniqueName: \"kubernetes.io/projected/7a48df33-a76e-47c7-a418-d60f2b7f74de-kube-api-access-dwfsn\") pod \"redhat-operators-g6ksm\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:03 crc kubenswrapper[4679]: I0203 12:08:03.765906 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.006634 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sthk"] Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.014393 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.112776 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871a99a3-a5e1-4e7a-926d-5168fec4b91e-secret-volume\") pod \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.112882 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume\") pod \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.112945 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfvh\" (UniqueName: \"kubernetes.io/projected/871a99a3-a5e1-4e7a-926d-5168fec4b91e-kube-api-access-5qfvh\") pod \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\" (UID: \"871a99a3-a5e1-4e7a-926d-5168fec4b91e\") " Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.114208 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume" (OuterVolumeSpecName: "config-volume") pod "871a99a3-a5e1-4e7a-926d-5168fec4b91e" (UID: "871a99a3-a5e1-4e7a-926d-5168fec4b91e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.121378 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871a99a3-a5e1-4e7a-926d-5168fec4b91e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "871a99a3-a5e1-4e7a-926d-5168fec4b91e" (UID: "871a99a3-a5e1-4e7a-926d-5168fec4b91e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.123072 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871a99a3-a5e1-4e7a-926d-5168fec4b91e-kube-api-access-5qfvh" (OuterVolumeSpecName: "kube-api-access-5qfvh") pod "871a99a3-a5e1-4e7a-926d-5168fec4b91e" (UID: "871a99a3-a5e1-4e7a-926d-5168fec4b91e"). InnerVolumeSpecName "kube-api-access-5qfvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.146410 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" event={"ID":"871a99a3-a5e1-4e7a-926d-5168fec4b91e","Type":"ContainerDied","Data":"648e743dc95e86f6e66cbb91554685eb6559c03ac6a2dbc842ad7d85971e346b"} Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.146496 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="648e743dc95e86f6e66cbb91554685eb6559c03ac6a2dbc842ad7d85971e346b" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.146539 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.149879 4679 generic.go:334] "Generic (PLEG): container finished" podID="af725d40-5d23-4b1b-bb34-d46a1b50aa09" containerID="ebcc6f79ce704c172f21fa872e8dd441c3f98889a99b84331cd10acbac15a137" exitCode=0 Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.150669 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af725d40-5d23-4b1b-bb34-d46a1b50aa09","Type":"ContainerDied","Data":"ebcc6f79ce704c172f21fa872e8dd441c3f98889a99b84331cd10acbac15a137"} Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.157191 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g6ksm"] Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.212461 4679 generic.go:334] "Generic (PLEG): container finished" podID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerID="c96b2396629ed45618e766d9115ee051ffbdaa9561ff59c3fc5ef571cda649c3" exitCode=0 Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.215636 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sc9p" event={"ID":"34cbcbd4-c586-4962-8fe8-c5ccdd822da4","Type":"ContainerDied","Data":"c96b2396629ed45618e766d9115ee051ffbdaa9561ff59c3fc5ef571cda649c3"} Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.215695 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sc9p" event={"ID":"34cbcbd4-c586-4962-8fe8-c5ccdd822da4","Type":"ContainerStarted","Data":"1173dc48525bcb8c2c0bef38376b95793c1ac84e1d1505028ea7267531ac69fe"} Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.234866 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfvh\" (UniqueName: \"kubernetes.io/projected/871a99a3-a5e1-4e7a-926d-5168fec4b91e-kube-api-access-5qfvh\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.238391 4679 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871a99a3-a5e1-4e7a-926d-5168fec4b91e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.238566 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871a99a3-a5e1-4e7a-926d-5168fec4b91e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.254812 4679 generic.go:334] "Generic (PLEG): container finished" podID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerID="4875059d49a088457a815d9747481ec6eaaf6484e96250f0e7701504ee8b8362" exitCode=0 Feb 03 
12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.285054 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerStarted","Data":"3b921e524a356a7c313ee6c34361b5f4f371bdbe00c7da09249f89147c8518bf"} Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.285576 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerDied","Data":"4875059d49a088457a815d9747481ec6eaaf6484e96250f0e7701504ee8b8362"} Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.323620 4679 patch_prober.go:28] interesting pod/router-default-5444994796-z8sz6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 12:08:04 crc kubenswrapper[4679]: [-]has-synced failed: reason withheld Feb 03 12:08:04 crc kubenswrapper[4679]: [+]process-running ok Feb 03 12:08:04 crc kubenswrapper[4679]: healthz check failed Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.324012 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8sz6" podUID="969330b9-18aa-4a19-908c-f2acf32431cb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.687198 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.879483 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0141f302-70c3-4151-9a55-3e472dc19198-kube-api-access\") pod \"0141f302-70c3-4151-9a55-3e472dc19198\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.879701 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0141f302-70c3-4151-9a55-3e472dc19198-kubelet-dir\") pod \"0141f302-70c3-4151-9a55-3e472dc19198\" (UID: \"0141f302-70c3-4151-9a55-3e472dc19198\") " Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.880163 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0141f302-70c3-4151-9a55-3e472dc19198-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0141f302-70c3-4151-9a55-3e472dc19198" (UID: "0141f302-70c3-4151-9a55-3e472dc19198"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.886649 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0141f302-70c3-4151-9a55-3e472dc19198-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0141f302-70c3-4151-9a55-3e472dc19198" (UID: "0141f302-70c3-4151-9a55-3e472dc19198"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.982157 4679 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0141f302-70c3-4151-9a55-3e472dc19198-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:04 crc kubenswrapper[4679]: I0203 12:08:04.982211 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0141f302-70c3-4151-9a55-3e472dc19198-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.320113 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.324133 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-z8sz6" Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.357379 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0141f302-70c3-4151-9a55-3e472dc19198","Type":"ContainerDied","Data":"40d815570bbf945fc8b5ab226f2c10c5e541765008302e7693badfa482e4d2a1"} Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.357431 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40d815570bbf945fc8b5ab226f2c10c5e541765008302e7693badfa482e4d2a1" Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.357521 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.412312 4679 generic.go:334] "Generic (PLEG): container finished" podID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerID="e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a" exitCode=0 Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.412454 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerDied","Data":"e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a"} Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.412486 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerStarted","Data":"bd05af71053e390603b75958a641c97833637c0018654f16de5f56ab8e570b63"} Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.458322 4679 generic.go:334] "Generic (PLEG): container finished" podID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerID="86690aa89a90d68ad4487c9bfadafe445ec3b508006aadafb5b6182e6df5a49f" exitCode=0 Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.458775 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerDied","Data":"86690aa89a90d68ad4487c9bfadafe445ec3b508006aadafb5b6182e6df5a49f"} Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.554773 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7hz4q" Feb 03 12:08:05 crc kubenswrapper[4679]: I0203 12:08:05.879314 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.007129 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kube-api-access\") pod \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.007201 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kubelet-dir\") pod \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\" (UID: \"af725d40-5d23-4b1b-bb34-d46a1b50aa09\") " Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.007778 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af725d40-5d23-4b1b-bb34-d46a1b50aa09" (UID: "af725d40-5d23-4b1b-bb34-d46a1b50aa09"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.009621 4679 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.029721 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af725d40-5d23-4b1b-bb34-d46a1b50aa09" (UID: "af725d40-5d23-4b1b-bb34-d46a1b50aa09"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.110852 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af725d40-5d23-4b1b-bb34-d46a1b50aa09-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.537516 4679 util.go:48] "No ready sandbox for pod can be found. 
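
The UnmountVolume / TearDown / "Volume detached" sequence above is the kubelet volume manager reconciling its actual state (what is mounted) against its desired state (what running pods need): once the finished pruner pods are gone, their kube-api-access and kubelet-dir volumes are torn down and reported detached. A toy sketch of that diff-and-act loop, with illustrative types rather than kubelet's own:

package main

import "fmt"

// Toy model of the volume reconciler: anything mounted but no longer desired
// is unmounted; anything desired but not yet mounted is mounted.
type volumeKey struct{ podUID, volume string }

func reconcile(desired, actual map[volumeKey]bool) {
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.volume, v.podUID)
			delete(actual, v) // TearDown succeeded -> "Volume detached"
		}
	}
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.volume, v.podUID)
			actual[v] = true // SetUp succeeded
		}
	}
}

func main() {
	// Pod deleted: its volumes are still mounted, but nothing desires them.
	actual := map[volumeKey]bool{
		{"0141f302-70c3-4151-9a55-3e472dc19198", "kube-api-access"}: true,
		{"0141f302-70c3-4151-9a55-3e472dc19198", "kubelet-dir"}:     true,
	}
	reconcile(map[volumeKey]bool{}, actual)
}
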
Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.537981 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af725d40-5d23-4b1b-bb34-d46a1b50aa09","Type":"ContainerDied","Data":"da8078cfdcef3f8b8d1a95f5315e2392bc473071c8e80ebf194422fb476e0f59"}
Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.538008 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8078cfdcef3f8b8d1a95f5315e2392bc473071c8e80ebf194422fb476e0f59"
Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.735818 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:08:06 crc kubenswrapper[4679]: I0203 12:08:06.735902 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:08:08 crc kubenswrapper[4679]: I0203 12:08:08.483457 4679 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgzn9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 03 12:08:08 crc kubenswrapper[4679]: I0203 12:08:08.483853 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hgzn9" podUID="915772ff-e239-46f4-931b-420de4ee4012" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 03 12:08:08 crc kubenswrapper[4679]: I0203 12:08:08.483491 4679 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgzn9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 03 12:08:08 crc kubenswrapper[4679]: I0203 12:08:08.483952 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgzn9" podUID="915772ff-e239-46f4-931b-420de4ee4012" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 03 12:08:10 crc kubenswrapper[4679]: I0203 12:08:10.299215 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:08:10 crc kubenswrapper[4679]: I0203 12:08:10.303996 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qlbms"
Feb 03 12:08:12 crc kubenswrapper[4679]: I0203 12:08:12.126016 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:08:12 crc kubenswrapper[4679]: I0203 12:08:12.137550 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba5e4da3-455d-4394-824c-2dfe080bc2c5-metrics-certs\") pod \"network-metrics-daemon-j8bgc\" (UID: \"ba5e4da3-455d-4394-824c-2dfe080bc2c5\") " pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:08:12 crc kubenswrapper[4679]: I0203 12:08:12.252528 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j8bgc"
Feb 03 12:08:12 crc kubenswrapper[4679]: I0203 12:08:12.810300 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j8bgc"]
Feb 03 12:08:12 crc kubenswrapper[4679]: W0203 12:08:12.822448 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5e4da3_455d_4394_824c_2dfe080bc2c5.slice/crio-5cfef68787f1ec8de944cb11d6c3842ba0e17264979fba87900db559fa7dd647 WatchSource:0}: Error finding container 5cfef68787f1ec8de944cb11d6c3842ba0e17264979fba87900db559fa7dd647: Status 404 returned error can't find the container with id 5cfef68787f1ec8de944cb11d6c3842ba0e17264979fba87900db559fa7dd647
Feb 03 12:08:13 crc kubenswrapper[4679]: I0203 12:08:13.620511 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" event={"ID":"ba5e4da3-455d-4394-824c-2dfe080bc2c5","Type":"ContainerStarted","Data":"5cfef68787f1ec8de944cb11d6c3842ba0e17264979fba87900db559fa7dd647"}
Feb 03 12:08:15 crc kubenswrapper[4679]: I0203 12:08:15.652998 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" event={"ID":"ba5e4da3-455d-4394-824c-2dfe080bc2c5","Type":"ContainerStarted","Data":"425046c5a8e57e113376463b47a37050f1a6869f79076f0a4ff51568d6e591f2"}
Feb 03 12:08:18 crc kubenswrapper[4679]: I0203 12:08:18.490689 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hgzn9"
Feb 03 12:08:19 crc kubenswrapper[4679]: I0203 12:08:19.736170 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4cnwb"]
Feb 03 12:08:19 crc kubenswrapper[4679]: I0203 12:08:19.736542 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerName="controller-manager" containerID="cri-o://042fc952951bc003a2739536461f8a3a3fdd7107d3709462c2f25a32b33e7cbf" gracePeriod=30
Feb 03 12:08:19 crc kubenswrapper[4679]: I0203 12:08:19.775617 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"]
Feb 03 12:08:19 crc kubenswrapper[4679]: I0203 12:08:19.775940 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager" containerID="cri-o://760ac184df635e6bd9b3fd8395bcf0fc3d3c85838754b23dc2bcab27f4dea37c" gracePeriod=30
Feb 03 12:08:20 crc kubenswrapper[4679]: I0203 12:08:20.355205 4679 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cb2wb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 03 12:08:20 crc kubenswrapper[4679]: I0203 12:08:20.356061 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 03 12:08:20 crc kubenswrapper[4679]: I0203 12:08:20.697558 4679 generic.go:334] "Generic (PLEG): container finished" podID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerID="042fc952951bc003a2739536461f8a3a3fdd7107d3709462c2f25a32b33e7cbf" exitCode=0
Feb 03 12:08:20 crc kubenswrapper[4679]: I0203 12:08:20.697711 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" event={"ID":"863f865e-918d-468a-ae6e-fcd314d7aa79","Type":"ContainerDied","Data":"042fc952951bc003a2739536461f8a3a3fdd7107d3709462c2f25a32b33e7cbf"}
Feb 03 12:08:20 crc kubenswrapper[4679]: I0203 12:08:20.701247 4679 generic.go:334] "Generic (PLEG): container finished" podID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerID="760ac184df635e6bd9b3fd8395bcf0fc3d3c85838754b23dc2bcab27f4dea37c" exitCode=0
Feb 03 12:08:20 crc kubenswrapper[4679]: I0203 12:08:20.701315 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" event={"ID":"52d1a8cb-6d83-47ff-976b-f752f09a27bb","Type":"ContainerDied","Data":"760ac184df635e6bd9b3fd8395bcf0fc3d3c85838754b23dc2bcab27f4dea37c"}
Feb 03 12:08:21 crc kubenswrapper[4679]: I0203 12:08:21.075104 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.844857 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.856781 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.875930 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"]
Feb 03 12:08:27 crc kubenswrapper[4679]: E0203 12:08:27.876163 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876176 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager"
Feb 03 12:08:27 crc kubenswrapper[4679]: E0203 12:08:27.876197 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0141f302-70c3-4151-9a55-3e472dc19198" containerName="pruner"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876344 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0141f302-70c3-4151-9a55-3e472dc19198" containerName="pruner"
Feb 03 12:08:27 crc kubenswrapper[4679]: E0203 12:08:27.876375 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerName="controller-manager"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876382 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerName="controller-manager"
Feb 03 12:08:27 crc kubenswrapper[4679]: E0203 12:08:27.876394 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871a99a3-a5e1-4e7a-926d-5168fec4b91e" containerName="collect-profiles"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876402 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="871a99a3-a5e1-4e7a-926d-5168fec4b91e" containerName="collect-profiles"
Feb 03 12:08:27 crc kubenswrapper[4679]: E0203 12:08:27.876411 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af725d40-5d23-4b1b-bb34-d46a1b50aa09" containerName="pruner"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876416 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="af725d40-5d23-4b1b-bb34-d46a1b50aa09" containerName="pruner"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876677 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="0141f302-70c3-4151-9a55-3e472dc19198" containerName="pruner"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876715 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="af725d40-5d23-4b1b-bb34-d46a1b50aa09" containerName="pruner"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876732 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" containerName="controller-manager"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876745 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="871a99a3-a5e1-4e7a-926d-5168fec4b91e" containerName="collect-profiles"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.876755 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" containerName="route-controller-manager"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.877416 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
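
Before admitting the replacement controller-manager pod, the CPU and memory managers sweep out per-container state for pods that no longer exist, hence the paired "RemoveStaleState: removing container" / "Deleted CPUSet assignment" lines above for the pruner, collect-profiles, and the old controller-manager and route-controller-manager containers. The pattern is a map sweep against the set of still-active containers; a toy version, not kubelet's own types:

package main

import "fmt"

type containerKey struct{ podUID, container string }

// removeStaleState drops CPUSet assignments for containers that are no
// longer in the active set, mirroring the RemoveStaleState log lines.
func removeStaleState(assignments map[containerKey]string, active map[containerKey]bool) {
	for key := range assignments {
		if !active[key] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				key.podUID, key.container)
			delete(assignments, key) // "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[containerKey]string{
		{"52d1a8cb-6d83-47ff-976b-f752f09a27bb", "route-controller-manager"}: "0-3",
		{"863f865e-918d-468a-ae6e-fcd314d7aa79", "controller-manager"}:       "0-3",
	}
	removeStaleState(assignments, map[containerKey]bool{}) // none of these pods survive
	fmt.Println(len(assignments), "assignments left")
}
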
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.889955 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"]
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.893594 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-config\") pod \"863f865e-918d-468a-ae6e-fcd314d7aa79\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.893744 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-proxy-ca-bundles\") pod \"863f865e-918d-468a-ae6e-fcd314d7aa79\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.893796 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5wbs\" (UniqueName: \"kubernetes.io/projected/52d1a8cb-6d83-47ff-976b-f752f09a27bb-kube-api-access-p5wbs\") pod \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.893843 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863f865e-918d-468a-ae6e-fcd314d7aa79-serving-cert\") pod \"863f865e-918d-468a-ae6e-fcd314d7aa79\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.893906 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2mz\" (UniqueName: \"kubernetes.io/projected/863f865e-918d-468a-ae6e-fcd314d7aa79-kube-api-access-8z2mz\") pod \"863f865e-918d-468a-ae6e-fcd314d7aa79\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894011 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-client-ca\") pod \"863f865e-918d-468a-ae6e-fcd314d7aa79\" (UID: \"863f865e-918d-468a-ae6e-fcd314d7aa79\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894080 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-client-ca\") pod \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894149 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d1a8cb-6d83-47ff-976b-f752f09a27bb-serving-cert\") pod \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894181 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-config\") pod \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\" (UID: \"52d1a8cb-6d83-47ff-976b-f752f09a27bb\") "
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894510 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-proxy-ca-bundles\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894555 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-client-ca\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894581 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54f8714-1127-497a-9c28-dcbab966f228-serving-cert\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894626 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-config\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.894694 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9cc2\" (UniqueName: \"kubernetes.io/projected/e54f8714-1127-497a-9c28-dcbab966f228-kube-api-access-x9cc2\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.896321 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "52d1a8cb-6d83-47ff-976b-f752f09a27bb" (UID: "52d1a8cb-6d83-47ff-976b-f752f09a27bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.896647 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "863f865e-918d-468a-ae6e-fcd314d7aa79" (UID: "863f865e-918d-468a-ae6e-fcd314d7aa79"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.896992 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-config" (OuterVolumeSpecName: "config") pod "863f865e-918d-468a-ae6e-fcd314d7aa79" (UID: "863f865e-918d-468a-ae6e-fcd314d7aa79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.899442 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-config" (OuterVolumeSpecName: "config") pod "52d1a8cb-6d83-47ff-976b-f752f09a27bb" (UID: "52d1a8cb-6d83-47ff-976b-f752f09a27bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.900995 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-client-ca" (OuterVolumeSpecName: "client-ca") pod "863f865e-918d-468a-ae6e-fcd314d7aa79" (UID: "863f865e-918d-468a-ae6e-fcd314d7aa79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.920949 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d1a8cb-6d83-47ff-976b-f752f09a27bb-kube-api-access-p5wbs" (OuterVolumeSpecName: "kube-api-access-p5wbs") pod "52d1a8cb-6d83-47ff-976b-f752f09a27bb" (UID: "52d1a8cb-6d83-47ff-976b-f752f09a27bb"). InnerVolumeSpecName "kube-api-access-p5wbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.921707 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863f865e-918d-468a-ae6e-fcd314d7aa79-kube-api-access-8z2mz" (OuterVolumeSpecName: "kube-api-access-8z2mz") pod "863f865e-918d-468a-ae6e-fcd314d7aa79" (UID: "863f865e-918d-468a-ae6e-fcd314d7aa79"). InnerVolumeSpecName "kube-api-access-8z2mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.924197 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863f865e-918d-468a-ae6e-fcd314d7aa79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "863f865e-918d-468a-ae6e-fcd314d7aa79" (UID: "863f865e-918d-468a-ae6e-fcd314d7aa79"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.927922 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d1a8cb-6d83-47ff-976b-f752f09a27bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "52d1a8cb-6d83-47ff-976b-f752f09a27bb" (UID: "52d1a8cb-6d83-47ff-976b-f752f09a27bb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.999152 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-config\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.999314 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9cc2\" (UniqueName: \"kubernetes.io/projected/e54f8714-1127-497a-9c28-dcbab966f228-kube-api-access-x9cc2\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.999427 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-proxy-ca-bundles\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.999467 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-client-ca\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:27 crc kubenswrapper[4679]: I0203 12:08:27.999493 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54f8714-1127-497a-9c28-dcbab966f228-serving-cert\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000544 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-client-ca\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000630 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000648 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000663 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5wbs\" (UniqueName: \"kubernetes.io/projected/52d1a8cb-6d83-47ff-976b-f752f09a27bb-kube-api-access-p5wbs\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000675 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863f865e-918d-468a-ae6e-fcd314d7aa79-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000687 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z2mz\" (UniqueName: \"kubernetes.io/projected/863f865e-918d-468a-ae6e-fcd314d7aa79-kube-api-access-8z2mz\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000699 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863f865e-918d-468a-ae6e-fcd314d7aa79-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000712 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000724 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52d1a8cb-6d83-47ff-976b-f752f09a27bb-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.000737 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d1a8cb-6d83-47ff-976b-f752f09a27bb-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.001147 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-proxy-ca-bundles\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.002746 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-config\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.004968 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54f8714-1127-497a-9c28-dcbab966f228-serving-cert\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.018814 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9cc2\" (UniqueName: \"kubernetes.io/projected/e54f8714-1127-497a-9c28-dcbab966f228-kube-api-access-x9cc2\") pod \"controller-manager-7b77cd967f-fq8rj\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.250333 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.758967 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb" event={"ID":"863f865e-918d-468a-ae6e-fcd314d7aa79","Type":"ContainerDied","Data":"f8da6e4075b1c288c62b8d18eda64e634104cd40a1f60a9f9fe110ae6d544095"}
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.759353 4679 scope.go:117] "RemoveContainer" containerID="042fc952951bc003a2739536461f8a3a3fdd7107d3709462c2f25a32b33e7cbf"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.759075 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4cnwb"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.761401 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb" event={"ID":"52d1a8cb-6d83-47ff-976b-f752f09a27bb","Type":"ContainerDied","Data":"7523e70b9e5537ed9e6d54ffd3fde42265f7d46da6643f0f909e37e13e4c7fff"}
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.761524 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.783681 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4cnwb"]
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.789006 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4cnwb"]
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.794150 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"]
Feb 03 12:08:28 crc kubenswrapper[4679]: I0203 12:08:28.797265 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cb2wb"]
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.219938 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d1a8cb-6d83-47ff-976b-f752f09a27bb" path="/var/lib/kubelet/pods/52d1a8cb-6d83-47ff-976b-f752f09a27bb/volumes"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.221328 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863f865e-918d-468a-ae6e-fcd314d7aa79" path="/var/lib/kubelet/pods/863f865e-918d-468a-ae6e-fcd314d7aa79/volumes"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.521272 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-92c68"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.733384 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"]
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.734199 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.736718 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.737215 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.738221 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.740442 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.739736 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.741596 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.752555 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"]
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.840556 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6c2686-6167-4918-a8eb-ec88ac48e2de-serving-cert\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.840703 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tm8\" (UniqueName: \"kubernetes.io/projected/7b6c2686-6167-4918-a8eb-ec88ac48e2de-kube-api-access-k9tm8\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.840893 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-client-ca\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.840935 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-config\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.942791 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-client-ca\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.942846 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-config\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.942910 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6c2686-6167-4918-a8eb-ec88ac48e2de-serving-cert\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.942932 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tm8\" (UniqueName: \"kubernetes.io/projected/7b6c2686-6167-4918-a8eb-ec88ac48e2de-kube-api-access-k9tm8\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.944172 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-client-ca\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.945292 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-config\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.952608 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6c2686-6167-4918-a8eb-ec88ac48e2de-serving-cert\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:30 crc kubenswrapper[4679]: I0203 12:08:30.962388 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tm8\" (UniqueName: \"kubernetes.io/projected/7b6c2686-6167-4918-a8eb-ec88ac48e2de-kube-api-access-k9tm8\") pod \"route-controller-manager-6df8d8445-bf5f2\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:31 crc kubenswrapper[4679]: I0203 12:08:31.061780 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.074687 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.076256 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwfsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-g6ksm_openshift-marketplace(7a48df33-a76e-47c7-a418-d60f2b7f74de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.077860 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g6ksm" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.097480 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.097673 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmlhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4b5zq_openshift-marketplace(db9eac1b-370d-46dc-a81c-3f2e0befe712): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.098964 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4b5zq" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.099554 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.099635 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97xbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fr7jk_openshift-marketplace(6546cf97-de00-4569-9187-b3e4d69fe5d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 03 12:08:35 crc kubenswrapper[4679]: E0203 12:08:35.101815 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fr7jk" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9"
Feb 03 12:08:36 crc kubenswrapper[4679]: I0203 12:08:36.358515 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 12:08:36 crc kubenswrapper[4679]: I0203 12:08:36.735514 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:08:36 crc kubenswrapper[4679]: I0203 12:08:36.735939 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:08:36 crc kubenswrapper[4679]: E0203 12:08:36.781728 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g6ksm" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de"
Feb 03 12:08:36 crc kubenswrapper[4679]: E0203 12:08:36.781801 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4b5zq" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712"
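
With the pulls canceled, the marketplace pods flip from ErrImagePull to ImagePullBackOff above: the kubelet does not retry a failed pull in a tight loop but backs off exponentially per image. A sketch of that policy's shape, assuming the commonly cited 10s initial delay doubling to a 5m cap (the exact constants are internal to the kubelet):

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous backoff up to a ceiling, the shape of
// kubelet's image-pull (and crash-loop) backoff. The 10s/5m constants
// are the commonly cited defaults, assumed here for illustration.
func nextDelay(prev time.Duration) time.Duration {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	if prev == 0 {
		return initialDelay
	}
	if d := prev * 2; d < maxDelay {
		return d
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 1; i <= 7; i++ {
		d = nextDelay(d)
		fmt.Printf("attempt %d: back off %v before re-pulling the catalog image\n", i, d)
	}
}
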
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4b5zq" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" Feb 03 12:08:36 crc kubenswrapper[4679]: E0203 12:08:36.781809 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fr7jk" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" Feb 03 12:08:36 crc kubenswrapper[4679]: I0203 12:08:36.823866 4679 scope.go:117] "RemoveContainer" containerID="760ac184df635e6bd9b3fd8395bcf0fc3d3c85838754b23dc2bcab27f4dea37c" Feb 03 12:08:36 crc kubenswrapper[4679]: E0203 12:08:36.916652 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 12:08:36 crc kubenswrapper[4679]: E0203 12:08:36.916830 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzk4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h7wbv_openshift-marketplace(9cb85479-bcf1-4106-9a2b-560b2f20571a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:08:36 crc kubenswrapper[4679]: E0203 12:08:36.917924 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h7wbv" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.006881 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.007075 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbs62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2sthk_openshift-marketplace(15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.008421 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2sthk" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.049512 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"] Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.063273 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.063586 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkmh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tbtbf_openshift-marketplace(ef00d1e7-934d-4a44-8301-d9a778fe78d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.064845 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tbtbf" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" Feb 03 12:08:37 crc kubenswrapper[4679]: W0203 12:08:37.092510 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b6c2686_6167_4918_a8eb_ec88ac48e2de.slice/crio-269deb8ab853e7c36cb0ef337d007a080ac5477937e12e83c34ecca9cd676f53 WatchSource:0}: Error finding container 269deb8ab853e7c36cb0ef337d007a080ac5477937e12e83c34ecca9cd676f53: Status 404 returned error can't find the container with id 269deb8ab853e7c36cb0ef337d007a080ac5477937e12e83c34ecca9cd676f53 Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.092659 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"] Feb 03 12:08:37 crc kubenswrapper[4679]: W0203 12:08:37.133906 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54f8714_1127_497a_9c28_dcbab966f228.slice/crio-d184640f150413950abd96ab9fcf69b833ecd54e83eb0a37b0cbe74f10441be1 WatchSource:0}: Error finding container d184640f150413950abd96ab9fcf69b833ecd54e83eb0a37b0cbe74f10441be1: Status 404 returned error can't find the container with id d184640f150413950abd96ab9fcf69b833ecd54e83eb0a37b0cbe74f10441be1 Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.692452 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.693462 4679 util.go:30] "No sandbox for pod can be found. 
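
The marketplace catalog pods above (redhat-marketplace, certified-operators, community-operators, redhat-operators) are all failing the same way: the extract-content init container cannot pull its index image from registry.redhat.io, the pull is cancelled mid-copy ("context canceled"), and the kubelet parks each pod in ImagePullBackOff. A minimal client-go sketch for triaging this state from outside the node; the namespace is taken from the log, and it assumes only a reachable kubeconfig at the default location — everything else is illustrative, not part of the logged cluster:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("openshift-marketplace").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, st := range p.Status.InitContainerStatuses {
                if st.State.Waiting != nil {
                    // Prints ErrImagePull / ImagePullBackOff plus the registry
                    // message for the extract-content containers seen above.
                    fmt.Printf("%s/%s: %s: %s\n", p.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
                }
            }
        }
    }
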
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.698267 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.699129 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.711799 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.741245 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.741347 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.820995 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j8bgc" event={"ID":"ba5e4da3-455d-4394-824c-2dfe080bc2c5","Type":"ContainerStarted","Data":"c62008a859189445f57a14b02954eca3dfe6105457bef6ca09473413fb266a0d"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.822812 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" event={"ID":"e54f8714-1127-497a-9c28-dcbab966f228","Type":"ContainerStarted","Data":"503c80d0229d0957d8fe3f836566a195dd9f43e869456ad4258926bd5b249a78"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.822873 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" event={"ID":"e54f8714-1127-497a-9c28-dcbab966f228","Type":"ContainerStarted","Data":"d184640f150413950abd96ab9fcf69b833ecd54e83eb0a37b0cbe74f10441be1"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.822898 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.828094 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" event={"ID":"7b6c2686-6167-4918-a8eb-ec88ac48e2de","Type":"ContainerStarted","Data":"93a00d423a3ad2ee8ad93b4f64e29a5dd934e5d48edf129132a1fa789173ce10"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.828156 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" event={"ID":"7b6c2686-6167-4918-a8eb-ec88ac48e2de","Type":"ContainerStarted","Data":"269deb8ab853e7c36cb0ef337d007a080ac5477937e12e83c34ecca9cd676f53"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.828399 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.834519 4679 generic.go:334] "Generic (PLEG): container finished" podID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerID="54c5d7c71e33b2bac7e58d3440fd0b433786b5d68d46637b87ec4fd41899e027" exitCode=0 Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.834667 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sc9p" event={"ID":"34cbcbd4-c586-4962-8fe8-c5ccdd822da4","Type":"ContainerDied","Data":"54c5d7c71e33b2bac7e58d3440fd0b433786b5d68d46637b87ec4fd41899e027"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.839910 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.839984 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.840691 4679 generic.go:334] "Generic (PLEG): container finished" podID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerID="8d176d87db760cd6e2ebb7cc790bb6bf1b578d2043118bfda9b5058e0d1ddb34" exitCode=0 Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.840818 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbtrc" event={"ID":"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7","Type":"ContainerDied","Data":"8d176d87db760cd6e2ebb7cc790bb6bf1b578d2043118bfda9b5058e0d1ddb34"} Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.842520 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.842667 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.843411 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.848241 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tbtbf" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.848242 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h7wbv" 
podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" Feb 03 12:08:37 crc kubenswrapper[4679]: E0203 12:08:37.848388 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2sthk" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.858789 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j8bgc" podStartSLOduration=168.858764017 podStartE2EDuration="2m48.858764017s" podCreationTimestamp="2026-02-03 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:37.854514458 +0000 UTC m=+190.329410546" watchObservedRunningTime="2026-02-03 12:08:37.858764017 +0000 UTC m=+190.333660105" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.873507 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.885422 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" podStartSLOduration=18.885395648 podStartE2EDuration="18.885395648s" podCreationTimestamp="2026-02-03 12:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:37.884235195 +0000 UTC m=+190.359131293" watchObservedRunningTime="2026-02-03 12:08:37.885395648 +0000 UTC m=+190.360291746" Feb 03 12:08:37 crc kubenswrapper[4679]: I0203 12:08:37.925629 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" podStartSLOduration=18.925606511 podStartE2EDuration="18.925606511s" podCreationTimestamp="2026-02-03 12:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:37.923941054 +0000 UTC m=+190.398837152" watchObservedRunningTime="2026-02-03 12:08:37.925606511 +0000 UTC m=+190.400502609" Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.012581 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.229471 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.860843 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbtrc" event={"ID":"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7","Type":"ContainerStarted","Data":"9183d1fd8a6636819a596451ca20a60d09cd68ccf6cd2f4c080bdd4153af5e7f"} Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.866840 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sc9p" event={"ID":"34cbcbd4-c586-4962-8fe8-c5ccdd822da4","Type":"ContainerStarted","Data":"7749b30f66a36d3411b9d0630872ee4d69ab4d339313d77613b7aafdb97c2e78"} Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.869738 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f41f17f-6e87-43bb-8bcd-56dad8f2c617","Type":"ContainerStarted","Data":"34e36c4f543d05d09a71b542fe760ec1ccff58d9e46bf29bc6c44e200fe32b92"} Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.869768 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f41f17f-6e87-43bb-8bcd-56dad8f2c617","Type":"ContainerStarted","Data":"e8f97432e2a0473ec3e00f445bc19426ac962f4244d170c5b8565eeb1bfe3fcc"} Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.886921 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pbtrc" podStartSLOduration=3.631739857 podStartE2EDuration="38.886896032s" podCreationTimestamp="2026-02-03 12:08:00 +0000 UTC" firstStartedPulling="2026-02-03 12:08:03.074652822 +0000 UTC m=+155.549548920" lastFinishedPulling="2026-02-03 12:08:38.329808997 +0000 UTC m=+190.804705095" observedRunningTime="2026-02-03 12:08:38.88433912 +0000 UTC m=+191.359235208" watchObservedRunningTime="2026-02-03 12:08:38.886896032 +0000 UTC m=+191.361792120" Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.922922 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7sc9p" podStartSLOduration=2.877241046 podStartE2EDuration="36.922887096s" podCreationTimestamp="2026-02-03 12:08:02 +0000 UTC" firstStartedPulling="2026-02-03 12:08:04.262175598 +0000 UTC m=+156.737071686" lastFinishedPulling="2026-02-03 12:08:38.307821648 +0000 UTC m=+190.782717736" observedRunningTime="2026-02-03 12:08:38.907118422 +0000 UTC m=+191.382014520" watchObservedRunningTime="2026-02-03 12:08:38.922887096 +0000 UTC m=+191.397783184" Feb 03 12:08:38 crc kubenswrapper[4679]: I0203 12:08:38.930844 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.930806969 podStartE2EDuration="1.930806969s" podCreationTimestamp="2026-02-03 12:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:38.928995998 +0000 UTC m=+191.403892106" watchObservedRunningTime="2026-02-03 12:08:38.930806969 +0000 UTC m=+191.405703067" Feb 03 12:08:39 crc kubenswrapper[4679]: I0203 12:08:39.779280 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"] Feb 03 12:08:39 crc kubenswrapper[4679]: I0203 12:08:39.874754 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"] Feb 03 12:08:39 crc kubenswrapper[4679]: I0203 12:08:39.890626 4679 generic.go:334] "Generic (PLEG): container finished" podID="7f41f17f-6e87-43bb-8bcd-56dad8f2c617" containerID="34e36c4f543d05d09a71b542fe760ec1ccff58d9e46bf29bc6c44e200fe32b92" exitCode=0 Feb 03 12:08:39 crc kubenswrapper[4679]: I0203 12:08:39.892516 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f41f17f-6e87-43bb-8bcd-56dad8f2c617","Type":"ContainerDied","Data":"34e36c4f543d05d09a71b542fe760ec1ccff58d9e46bf29bc6c44e200fe32b92"} Feb 03 12:08:40 crc kubenswrapper[4679]: I0203 12:08:40.897727 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" podUID="e54f8714-1127-497a-9c28-dcbab966f228" containerName="controller-manager" containerID="cri-o://503c80d0229d0957d8fe3f836566a195dd9f43e869456ad4258926bd5b249a78" gracePeriod=30 Feb 03 12:08:40 crc kubenswrapper[4679]: I0203 12:08:40.898522 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" podUID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" containerName="route-controller-manager" containerID="cri-o://93a00d423a3ad2ee8ad93b4f64e29a5dd934e5d48edf129132a1fa789173ce10" gracePeriod=30 Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.063639 4679 patch_prober.go:28] interesting pod/route-controller-manager-6df8d8445-bf5f2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.064275 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" podUID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.212159 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.297664 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kubelet-dir\") pod \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.297759 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kube-api-access\") pod \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\" (UID: \"7f41f17f-6e87-43bb-8bcd-56dad8f2c617\") " Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.297852 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f41f17f-6e87-43bb-8bcd-56dad8f2c617" (UID: "7f41f17f-6e87-43bb-8bcd-56dad8f2c617"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.298096 4679 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.307185 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f41f17f-6e87-43bb-8bcd-56dad8f2c617" (UID: "7f41f17f-6e87-43bb-8bcd-56dad8f2c617"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.400170 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f41f17f-6e87-43bb-8bcd-56dad8f2c617-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.472042 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.472109 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.920843 4679 generic.go:334] "Generic (PLEG): container finished" podID="e54f8714-1127-497a-9c28-dcbab966f228" containerID="503c80d0229d0957d8fe3f836566a195dd9f43e869456ad4258926bd5b249a78" exitCode=0 Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.921212 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" event={"ID":"e54f8714-1127-497a-9c28-dcbab966f228","Type":"ContainerDied","Data":"503c80d0229d0957d8fe3f836566a195dd9f43e869456ad4258926bd5b249a78"} Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.923565 4679 generic.go:334] "Generic (PLEG): container finished" podID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" containerID="93a00d423a3ad2ee8ad93b4f64e29a5dd934e5d48edf129132a1fa789173ce10" exitCode=0 Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.923663 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" event={"ID":"7b6c2686-6167-4918-a8eb-ec88ac48e2de","Type":"ContainerDied","Data":"93a00d423a3ad2ee8ad93b4f64e29a5dd934e5d48edf129132a1fa789173ce10"} Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.926230 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f41f17f-6e87-43bb-8bcd-56dad8f2c617","Type":"ContainerDied","Data":"e8f97432e2a0473ec3e00f445bc19426ac962f4244d170c5b8565eeb1bfe3fcc"} Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.926266 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f97432e2a0473ec3e00f445bc19426ac962f4244d170c5b8565eeb1bfe3fcc" Feb 03 12:08:41 crc kubenswrapper[4679]: I0203 12:08:41.926448 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.104173 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.768977 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.769058 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.812938 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.907060 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.915791 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.937013 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.937001 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2" event={"ID":"7b6c2686-6167-4918-a8eb-ec88ac48e2de","Type":"ContainerDied","Data":"269deb8ab853e7c36cb0ef337d007a080ac5477937e12e83c34ecca9cd676f53"} Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.937130 4679 scope.go:117] "RemoveContainer" containerID="93a00d423a3ad2ee8ad93b4f64e29a5dd934e5d48edf129132a1fa789173ce10" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.940131 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" event={"ID":"e54f8714-1127-497a-9c28-dcbab966f228","Type":"ContainerDied","Data":"d184640f150413950abd96ab9fcf69b833ecd54e83eb0a37b0cbe74f10441be1"} Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.940182 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b77cd967f-fq8rj" Feb 03 12:08:42 crc kubenswrapper[4679]: I0203 12:08:42.959665 4679 scope.go:117] "RemoveContainer" containerID="503c80d0229d0957d8fe3f836566a195dd9f43e869456ad4258926bd5b249a78" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031284 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54f8714-1127-497a-9c28-dcbab966f228-serving-cert\") pod \"e54f8714-1127-497a-9c28-dcbab966f228\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031387 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-client-ca\") pod \"e54f8714-1127-497a-9c28-dcbab966f228\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031424 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-config\") pod \"e54f8714-1127-497a-9c28-dcbab966f228\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031528 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-proxy-ca-bundles\") pod \"e54f8714-1127-497a-9c28-dcbab966f228\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031585 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-client-ca\") pod \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " Feb 03 
12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031631 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6c2686-6167-4918-a8eb-ec88ac48e2de-serving-cert\") pod \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031674 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tm8\" (UniqueName: \"kubernetes.io/projected/7b6c2686-6167-4918-a8eb-ec88ac48e2de-kube-api-access-k9tm8\") pod \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031714 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9cc2\" (UniqueName: \"kubernetes.io/projected/e54f8714-1127-497a-9c28-dcbab966f228-kube-api-access-x9cc2\") pod \"e54f8714-1127-497a-9c28-dcbab966f228\" (UID: \"e54f8714-1127-497a-9c28-dcbab966f228\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.031753 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-config\") pod \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\" (UID: \"7b6c2686-6167-4918-a8eb-ec88ac48e2de\") " Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.032475 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-client-ca" (OuterVolumeSpecName: "client-ca") pod "e54f8714-1127-497a-9c28-dcbab966f228" (UID: "e54f8714-1127-497a-9c28-dcbab966f228"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.032587 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e54f8714-1127-497a-9c28-dcbab966f228" (UID: "e54f8714-1127-497a-9c28-dcbab966f228"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.032713 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-config" (OuterVolumeSpecName: "config") pod "e54f8714-1127-497a-9c28-dcbab966f228" (UID: "e54f8714-1127-497a-9c28-dcbab966f228"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.032830 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b6c2686-6167-4918-a8eb-ec88ac48e2de" (UID: "7b6c2686-6167-4918-a8eb-ec88ac48e2de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.032971 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-config" (OuterVolumeSpecName: "config") pod "7b6c2686-6167-4918-a8eb-ec88ac48e2de" (UID: "7b6c2686-6167-4918-a8eb-ec88ac48e2de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.037172 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54f8714-1127-497a-9c28-dcbab966f228-kube-api-access-x9cc2" (OuterVolumeSpecName: "kube-api-access-x9cc2") pod "e54f8714-1127-497a-9c28-dcbab966f228" (UID: "e54f8714-1127-497a-9c28-dcbab966f228"). InnerVolumeSpecName "kube-api-access-x9cc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.041554 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6c2686-6167-4918-a8eb-ec88ac48e2de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b6c2686-6167-4918-a8eb-ec88ac48e2de" (UID: "7b6c2686-6167-4918-a8eb-ec88ac48e2de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.041741 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54f8714-1127-497a-9c28-dcbab966f228-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e54f8714-1127-497a-9c28-dcbab966f228" (UID: "e54f8714-1127-497a-9c28-dcbab966f228"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.042392 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6c2686-6167-4918-a8eb-ec88ac48e2de-kube-api-access-k9tm8" (OuterVolumeSpecName: "kube-api-access-k9tm8") pod "7b6c2686-6167-4918-a8eb-ec88ac48e2de" (UID: "7b6c2686-6167-4918-a8eb-ec88ac48e2de"). InnerVolumeSpecName "kube-api-access-k9tm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.133684 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6c2686-6167-4918-a8eb-ec88ac48e2de-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.133817 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tm8\" (UniqueName: \"kubernetes.io/projected/7b6c2686-6167-4918-a8eb-ec88ac48e2de-kube-api-access-k9tm8\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134140 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9cc2\" (UniqueName: \"kubernetes.io/projected/e54f8714-1127-497a-9c28-dcbab966f228-kube-api-access-x9cc2\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134159 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134168 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54f8714-1127-497a-9c28-dcbab966f228-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134177 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134188 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134196 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54f8714-1127-497a-9c28-dcbab966f228-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.134205 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6c2686-6167-4918-a8eb-ec88ac48e2de-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.269691 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.276200 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df8d8445-bf5f2"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.283767 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.290117 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b77cd967f-fq8rj"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.687753 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 12:08:43 crc kubenswrapper[4679]: E0203 12:08:43.688102 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f41f17f-6e87-43bb-8bcd-56dad8f2c617" 
containerName="pruner" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.688119 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f41f17f-6e87-43bb-8bcd-56dad8f2c617" containerName="pruner" Feb 03 12:08:43 crc kubenswrapper[4679]: E0203 12:08:43.688132 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" containerName="route-controller-manager" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.688140 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" containerName="route-controller-manager" Feb 03 12:08:43 crc kubenswrapper[4679]: E0203 12:08:43.688158 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54f8714-1127-497a-9c28-dcbab966f228" containerName="controller-manager" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.688166 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54f8714-1127-497a-9c28-dcbab966f228" containerName="controller-manager" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.688310 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" containerName="route-controller-manager" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.688330 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f41f17f-6e87-43bb-8bcd-56dad8f2c617" containerName="pruner" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.688338 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54f8714-1127-497a-9c28-dcbab966f228" containerName="controller-manager" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.691199 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.694731 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.695642 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.702958 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.744033 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kube-api-access\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.744132 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-var-lock\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.744159 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc 
kubenswrapper[4679]: I0203 12:08:43.747438 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d79b78bc-v469p"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.748736 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.750189 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.751977 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.752271 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.752400 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.752849 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.752875 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.753006 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.759370 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d79b78bc-v469p"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.759489 4679 util.go:30] "No sandbox for pod can be found. 
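
The SyncLoop ADD/UPDATE lines above are the kubelet consuming pod watch events for the replacement controller-manager and route-controller-manager deployments, while its reflectors pre-populate the Secret and ConfigMap caches that the volume mounts below will read from. The same event stream can be observed externally with a raw watch; a sketch under the usual kubeconfig assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        w, err := cs.CoreV1().Pods("openshift-controller-manager").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            // ADDED/MODIFIED/DELETED here correspond to the kubelet's
            // SyncLoop ADD/UPDATE/DELETE entries for this namespace.
            if pod, ok := ev.Object.(*corev1.Pod); ok {
                fmt.Printf("%s %s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
            }
        }
    }
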
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.760345 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.763278 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v"] Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.763909 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.763938 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.763990 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.763993 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.764099 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.764286 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.845627 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-var-lock\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.845687 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.845738 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kube-api-access\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.846444 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-var-lock\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.846491 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.865327 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kube-api-access\") pod \"installer-9-crc\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.946911 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwqnj\" (UniqueName: \"kubernetes.io/projected/14f79770-b6b2-4498-9694-73f3225fcc75-kube-api-access-bwqnj\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.946971 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-client-ca\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947003 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f79770-b6b2-4498-9694-73f3225fcc75-serving-cert\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947027 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f52f639-8ab0-4e3f-bb45-781cfcc14179-serving-cert\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947056 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-config\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947088 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-config\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947110 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-proxy-ca-bundles\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947156 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-client-ca\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:43 crc kubenswrapper[4679]: I0203 12:08:43.947177 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqrt\" (UniqueName: \"kubernetes.io/projected/1f52f639-8ab0-4e3f-bb45-781cfcc14179-kube-api-access-xkqrt\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.019154 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049339 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f79770-b6b2-4498-9694-73f3225fcc75-serving-cert\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049425 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f52f639-8ab0-4e3f-bb45-781cfcc14179-serving-cert\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049463 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-config\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049504 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-config\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049528 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-proxy-ca-bundles\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049584 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-client-ca\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049615 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqrt\" (UniqueName: \"kubernetes.io/projected/1f52f639-8ab0-4e3f-bb45-781cfcc14179-kube-api-access-xkqrt\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049649 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwqnj\" (UniqueName: \"kubernetes.io/projected/14f79770-b6b2-4498-9694-73f3225fcc75-kube-api-access-bwqnj\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.049679 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-client-ca\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.050891 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-client-ca\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.052840 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-client-ca\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.053201 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-proxy-ca-bundles\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.054267 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-config\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.054714 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f79770-b6b2-4498-9694-73f3225fcc75-serving-cert\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.056977 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-config\") pod 
\"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.060396 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f52f639-8ab0-4e3f-bb45-781cfcc14179-serving-cert\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.077020 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqrt\" (UniqueName: \"kubernetes.io/projected/1f52f639-8ab0-4e3f-bb45-781cfcc14179-kube-api-access-xkqrt\") pod \"route-controller-manager-788fcc4689-c4h7v\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.083425 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwqnj\" (UniqueName: \"kubernetes.io/projected/14f79770-b6b2-4498-9694-73f3225fcc75-kube-api-access-bwqnj\") pod \"controller-manager-86d79b78bc-v469p\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.085755 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.108931 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.225890 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6c2686-6167-4918-a8eb-ec88ac48e2de" path="/var/lib/kubelet/pods/7b6c2686-6167-4918-a8eb-ec88ac48e2de/volumes" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.227017 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54f8714-1127-497a-9c28-dcbab966f228" path="/var/lib/kubelet/pods/e54f8714-1127-497a-9c28-dcbab966f228/volumes" Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.259962 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.376016 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v"] Feb 03 12:08:44 crc kubenswrapper[4679]: W0203 12:08:44.383618 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f52f639_8ab0_4e3f_bb45_781cfcc14179.slice/crio-d9f1c974cc674a11e698a0c9090d3065d5483b6267251b3ce1de429df56d52b8 WatchSource:0}: Error finding container d9f1c974cc674a11e698a0c9090d3065d5483b6267251b3ce1de429df56d52b8: Status 404 returned error can't find the container with id d9f1c974cc674a11e698a0c9090d3065d5483b6267251b3ce1de429df56d52b8 Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.548118 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d79b78bc-v469p"] Feb 03 12:08:44 crc kubenswrapper[4679]: W0203 12:08:44.554173 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14f79770_b6b2_4498_9694_73f3225fcc75.slice/crio-74c546ed895966cb23c7905f9a7b42cb6590b6fae322caa57379a1ec834a6316 WatchSource:0}: Error finding container 74c546ed895966cb23c7905f9a7b42cb6590b6fae322caa57379a1ec834a6316: Status 404 returned error can't find the container with id 74c546ed895966cb23c7905f9a7b42cb6590b6fae322caa57379a1ec834a6316 Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.957606 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" event={"ID":"1f52f639-8ab0-4e3f-bb45-781cfcc14179","Type":"ContainerStarted","Data":"d9f1c974cc674a11e698a0c9090d3065d5483b6267251b3ce1de429df56d52b8"} Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.962606 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" event={"ID":"14f79770-b6b2-4498-9694-73f3225fcc75","Type":"ContainerStarted","Data":"74c546ed895966cb23c7905f9a7b42cb6590b6fae322caa57379a1ec834a6316"} Feb 03 12:08:44 crc kubenswrapper[4679]: I0203 12:08:44.964409 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"46d1120c-bf71-4af7-a6c9-7155f6b4404f","Type":"ContainerStarted","Data":"fcc0d5e1d26f1bcb25f4aa79987f7f942aad60271396691bcb419412d537a354"} Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.975037 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"46d1120c-bf71-4af7-a6c9-7155f6b4404f","Type":"ContainerStarted","Data":"d468bcbbf41be94a3fcc828105cd2b927838c5f3cc0b29bd26ea92620c566806"} Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.979551 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" event={"ID":"1f52f639-8ab0-4e3f-bb45-781cfcc14179","Type":"ContainerStarted","Data":"4d8d8f68bf4345e470c3c64a663b40aeacf6bee61420d37046c0403490b42bf8"} Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.979738 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.981572 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" event={"ID":"14f79770-b6b2-4498-9694-73f3225fcc75","Type":"ContainerStarted","Data":"773f14754ffb456e85e3ae679f32359c307a5f5e8faf2d4b0ef69304c43163e1"} Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.982166 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.989678 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.989982 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:08:45 crc kubenswrapper[4679]: I0203 12:08:45.995133 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.995113966 podStartE2EDuration="2.995113966s" podCreationTimestamp="2026-02-03 12:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:45.99281489 +0000 UTC m=+198.467710988" watchObservedRunningTime="2026-02-03 12:08:45.995113966 +0000 UTC m=+198.470010054" Feb 03 12:08:46 crc kubenswrapper[4679]: I0203 12:08:46.055028 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" podStartSLOduration=7.05500177 podStartE2EDuration="7.05500177s" podCreationTimestamp="2026-02-03 12:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:46.031916863 +0000 UTC m=+198.506812951" watchObservedRunningTime="2026-02-03 12:08:46.05500177 +0000 UTC m=+198.529897858" Feb 03 12:08:46 crc kubenswrapper[4679]: I0203 12:08:46.056822 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" podStartSLOduration=7.056814392 podStartE2EDuration="7.056814392s" podCreationTimestamp="2026-02-03 12:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:08:46.052230381 +0000 UTC m=+198.527126479" watchObservedRunningTime="2026-02-03 12:08:46.056814392 +0000 UTC m=+198.531710480" Feb 03 12:08:51 crc kubenswrapper[4679]: I0203 12:08:51.520374 4679 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:52 crc kubenswrapper[4679]: I0203 12:08:52.814204 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:53 crc kubenswrapper[4679]: I0203 12:08:53.848121 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pbtrc"] Feb 03 12:08:53 crc kubenswrapper[4679]: I0203 12:08:53.848413 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pbtrc" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="registry-server" containerID="cri-o://9183d1fd8a6636819a596451ca20a60d09cd68ccf6cd2f4c080bdd4153af5e7f" gracePeriod=2 Feb 03 12:08:54 crc kubenswrapper[4679]: I0203 12:08:54.034808 4679 generic.go:334] "Generic (PLEG): container finished" podID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerID="9183d1fd8a6636819a596451ca20a60d09cd68ccf6cd2f4c080bdd4153af5e7f" exitCode=0 Feb 03 12:08:54 crc kubenswrapper[4679]: I0203 12:08:54.034875 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbtrc" event={"ID":"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7","Type":"ContainerDied","Data":"9183d1fd8a6636819a596451ca20a60d09cd68ccf6cd2f4c080bdd4153af5e7f"} Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.250500 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sc9p"] Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.251356 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7sc9p" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="registry-server" containerID="cri-o://7749b30f66a36d3411b9d0630872ee4d69ab4d339313d77613b7aafdb97c2e78" gracePeriod=2 Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.713135 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.837277 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-utilities\") pod \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.837444 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-catalog-content\") pod \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.838466 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-utilities" (OuterVolumeSpecName: "utilities") pod "8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" (UID: "8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.838792 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfjs8\" (UniqueName: \"kubernetes.io/projected/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-kube-api-access-dfjs8\") pod \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\" (UID: \"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7\") " Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.839558 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.845267 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-kube-api-access-dfjs8" (OuterVolumeSpecName: "kube-api-access-dfjs8") pod "8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" (UID: "8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7"). InnerVolumeSpecName "kube-api-access-dfjs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.908906 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" (UID: "8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.941083 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:55 crc kubenswrapper[4679]: I0203 12:08:55.941145 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfjs8\" (UniqueName: \"kubernetes.io/projected/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7-kube-api-access-dfjs8\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.059268 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbtrc" event={"ID":"8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7","Type":"ContainerDied","Data":"9f147267e728399991498b2ff320699efe3e1399c78f846afa985dff0043aad9"} Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.059337 4679 scope.go:117] "RemoveContainer" containerID="9183d1fd8a6636819a596451ca20a60d09cd68ccf6cd2f4c080bdd4153af5e7f" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.059599 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbtrc" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.066841 4679 generic.go:334] "Generic (PLEG): container finished" podID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerID="7749b30f66a36d3411b9d0630872ee4d69ab4d339313d77613b7aafdb97c2e78" exitCode=0 Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.066898 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sc9p" event={"ID":"34cbcbd4-c586-4962-8fe8-c5ccdd822da4","Type":"ContainerDied","Data":"7749b30f66a36d3411b9d0630872ee4d69ab4d339313d77613b7aafdb97c2e78"} Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.104689 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pbtrc"] Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.115769 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pbtrc"] Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.188167 4679 scope.go:117] "RemoveContainer" containerID="8d176d87db760cd6e2ebb7cc790bb6bf1b578d2043118bfda9b5058e0d1ddb34" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.222463 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" path="/var/lib/kubelet/pods/8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7/volumes" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.270513 4679 scope.go:117] "RemoveContainer" containerID="c2e2b32b807de50d16836d915f103fc49e5e9d0e365385c7355f527c1942bd75" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.275937 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.348052 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-catalog-content\") pod \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.348102 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txb6\" (UniqueName: \"kubernetes.io/projected/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-kube-api-access-8txb6\") pod \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.348172 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-utilities\") pod \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\" (UID: \"34cbcbd4-c586-4962-8fe8-c5ccdd822da4\") " Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.349142 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-utilities" (OuterVolumeSpecName: "utilities") pod "34cbcbd4-c586-4962-8fe8-c5ccdd822da4" (UID: "34cbcbd4-c586-4962-8fe8-c5ccdd822da4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.351387 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-kube-api-access-8txb6" (OuterVolumeSpecName: "kube-api-access-8txb6") pod "34cbcbd4-c586-4962-8fe8-c5ccdd822da4" (UID: "34cbcbd4-c586-4962-8fe8-c5ccdd822da4"). InnerVolumeSpecName "kube-api-access-8txb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.375003 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34cbcbd4-c586-4962-8fe8-c5ccdd822da4" (UID: "34cbcbd4-c586-4962-8fe8-c5ccdd822da4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.449821 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.449869 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:56 crc kubenswrapper[4679]: I0203 12:08:56.449886 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txb6\" (UniqueName: \"kubernetes.io/projected/34cbcbd4-c586-4962-8fe8-c5ccdd822da4-kube-api-access-8txb6\") on node \"crc\" DevicePath \"\"" Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.077373 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerStarted","Data":"79ca52eb6b3bd32bd227dedef22431e76b7a31194d87278919d9e42ef00ab884"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.087951 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7sc9p" event={"ID":"34cbcbd4-c586-4962-8fe8-c5ccdd822da4","Type":"ContainerDied","Data":"1173dc48525bcb8c2c0bef38376b95793c1ac84e1d1505028ea7267531ac69fe"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.088042 4679 scope.go:117] "RemoveContainer" containerID="7749b30f66a36d3411b9d0630872ee4d69ab4d339313d77613b7aafdb97c2e78" Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.088143 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7sc9p" Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.090732 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerStarted","Data":"d4021346ff586bed4050a8178450e86b1b2ba769caf23f567a77e30955c99b82"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.093033 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerStarted","Data":"9a3ffc4402381fbd9b59003415773acf5e5a53706bc88385007bb2283b2092ee"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.105239 4679 generic.go:334] "Generic (PLEG): container finished" podID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerID="d180de66ba1059ba2a5e0e103f21cbae0b621d55ea20b791f20d2ad5ba4089e1" exitCode=0 Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.105342 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerDied","Data":"d180de66ba1059ba2a5e0e103f21cbae0b621d55ea20b791f20d2ad5ba4089e1"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.135689 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerStarted","Data":"149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.140239 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerStarted","Data":"43caa696ad81d79ad70a31cf49b2a4144c3e4226c92f7e0e42db3dda8275f563"} Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.258423 4679 scope.go:117] "RemoveContainer" containerID="54c5d7c71e33b2bac7e58d3440fd0b433786b5d68d46637b87ec4fd41899e027" Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.274276 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sc9p"] Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.278611 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7sc9p"] Feb 03 12:08:57 crc kubenswrapper[4679]: I0203 12:08:57.281014 4679 scope.go:117] "RemoveContainer" containerID="c96b2396629ed45618e766d9115ee051ffbdaa9561ff59c3fc5ef571cda649c3" Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.147860 4679 generic.go:334] "Generic (PLEG): container finished" podID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerID="d4021346ff586bed4050a8178450e86b1b2ba769caf23f567a77e30955c99b82" exitCode=0 Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.147968 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerDied","Data":"d4021346ff586bed4050a8178450e86b1b2ba769caf23f567a77e30955c99b82"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.150389 4679 generic.go:334] "Generic (PLEG): container finished" podID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerID="9a3ffc4402381fbd9b59003415773acf5e5a53706bc88385007bb2283b2092ee" exitCode=0 Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.150492 4679 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerDied","Data":"9a3ffc4402381fbd9b59003415773acf5e5a53706bc88385007bb2283b2092ee"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.152613 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerStarted","Data":"f88365545ca2f0b5242e31ffbfbfecb9528c21d5de736bfd564291c5ebbd8c56"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.155389 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerDied","Data":"149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.155338 4679 generic.go:334] "Generic (PLEG): container finished" podID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerID="149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693" exitCode=0 Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.158512 4679 generic.go:334] "Generic (PLEG): container finished" podID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerID="43caa696ad81d79ad70a31cf49b2a4144c3e4226c92f7e0e42db3dda8275f563" exitCode=0 Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.158571 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerDied","Data":"43caa696ad81d79ad70a31cf49b2a4144c3e4226c92f7e0e42db3dda8275f563"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.158672 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerStarted","Data":"3e1f19416f484376855a2179087d4220987c376a4581eab2d86ca9e17efff72a"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.161772 4679 generic.go:334] "Generic (PLEG): container finished" podID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerID="79ca52eb6b3bd32bd227dedef22431e76b7a31194d87278919d9e42ef00ab884" exitCode=0 Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.161829 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerDied","Data":"79ca52eb6b3bd32bd227dedef22431e76b7a31194d87278919d9e42ef00ab884"} Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.219225 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" path="/var/lib/kubelet/pods/34cbcbd4-c586-4962-8fe8-c5ccdd822da4/volumes" Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.238052 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4b5zq" podStartSLOduration=3.823973192 podStartE2EDuration="57.238034473s" podCreationTimestamp="2026-02-03 12:08:01 +0000 UTC" firstStartedPulling="2026-02-03 12:08:04.28493534 +0000 UTC m=+156.759831418" lastFinishedPulling="2026-02-03 12:08:57.698996601 +0000 UTC m=+210.173892699" observedRunningTime="2026-02-03 12:08:58.213557426 +0000 UTC m=+210.688453534" watchObservedRunningTime="2026-02-03 12:08:58.238034473 +0000 UTC m=+210.712930561" Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 
Feb 03 12:08:58 crc kubenswrapper[4679]: I0203 12:08:58.269564 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7wbv" podStartSLOduration=3.6343207079999997 podStartE2EDuration="59.269534069s" podCreationTimestamp="2026-02-03 12:07:59 +0000 UTC" firstStartedPulling="2026-02-03 12:08:02.044036578 +0000 UTC m=+154.518932666" lastFinishedPulling="2026-02-03 12:08:57.679249939 +0000 UTC m=+210.154146027" observedRunningTime="2026-02-03 12:08:58.263427596 +0000 UTC m=+210.738323694" watchObservedRunningTime="2026-02-03 12:08:58.269534069 +0000 UTC m=+210.744430157" Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.172793 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerStarted","Data":"b039ad2785d6e908631672a3bbcda5faaf1d5f1b81db9af12f4872bd7a8aadde"} Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.174957 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerStarted","Data":"e22c57c2f75ea37f6ca7353b07ab96f968cdf0c276009047518cce6c7e432602"} Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.177342 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerStarted","Data":"cae5fcb48fcdd26431ca353410a02b37732d3da74315d0dedc7493366197ef58"} Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.179676 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerStarted","Data":"f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec"} Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.200452 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fr7jk" podStartSLOduration=4.331467003 podStartE2EDuration="1m0.200428643s" podCreationTimestamp="2026-02-03 12:07:59 +0000 UTC" firstStartedPulling="2026-02-03 12:08:03.067852671 +0000 UTC m=+155.542748759" lastFinishedPulling="2026-02-03 12:08:58.936814311 +0000 UTC m=+211.411710399" observedRunningTime="2026-02-03 12:08:59.19786705 +0000 UTC m=+211.672763138" watchObservedRunningTime="2026-02-03 12:08:59.200428643 +0000 UTC m=+211.675324731" Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.219549 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tbtbf" podStartSLOduration=2.571285619 podStartE2EDuration="59.219525827s" podCreationTimestamp="2026-02-03 12:08:00 +0000 UTC" firstStartedPulling="2026-02-03 12:08:02.027807981 +0000 UTC m=+154.502704069" lastFinishedPulling="2026-02-03 12:08:58.676048189 +0000 UTC m=+211.150944277" observedRunningTime="2026-02-03 12:08:59.218810896 +0000 UTC m=+211.693706994" watchObservedRunningTime="2026-02-03 12:08:59.219525827 +0000 UTC m=+211.694421915" Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.249984 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sthk" podStartSLOduration=3.986465138 podStartE2EDuration="57.249955073s" podCreationTimestamp="2026-02-03 12:08:02 +0000 UTC" firstStartedPulling="2026-02-03 12:08:05.461500107 +0000 UTC m=+157.936396195" 
lastFinishedPulling="2026-02-03 12:08:58.724990042 +0000 UTC m=+211.199886130" observedRunningTime="2026-02-03 12:08:59.245762633 +0000 UTC m=+211.720658721" watchObservedRunningTime="2026-02-03 12:08:59.249955073 +0000 UTC m=+211.724851161" Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.273548 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g6ksm" podStartSLOduration=2.8381607349999998 podStartE2EDuration="56.273505163s" podCreationTimestamp="2026-02-03 12:08:03 +0000 UTC" firstStartedPulling="2026-02-03 12:08:05.435657349 +0000 UTC m=+157.910553437" lastFinishedPulling="2026-02-03 12:08:58.871001777 +0000 UTC m=+211.345897865" observedRunningTime="2026-02-03 12:08:59.269667544 +0000 UTC m=+211.744563632" watchObservedRunningTime="2026-02-03 12:08:59.273505163 +0000 UTC m=+211.748401251" Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.733849 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d79b78bc-v469p"] Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.734556 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" podUID="14f79770-b6b2-4498-9694-73f3225fcc75" containerName="controller-manager" containerID="cri-o://773f14754ffb456e85e3ae679f32359c307a5f5e8faf2d4b0ef69304c43163e1" gracePeriod=30 Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.777996 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v"] Feb 03 12:08:59 crc kubenswrapper[4679]: I0203 12:08:59.778320 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" podUID="1f52f639-8ab0-4e3f-bb45-781cfcc14179" containerName="route-controller-manager" containerID="cri-o://4d8d8f68bf4345e470c3c64a663b40aeacf6bee61420d37046c0403490b42bf8" gracePeriod=30 Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.098220 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.098289 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.167676 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.219893 4679 generic.go:334] "Generic (PLEG): container finished" podID="1f52f639-8ab0-4e3f-bb45-781cfcc14179" containerID="4d8d8f68bf4345e470c3c64a663b40aeacf6bee61420d37046c0403490b42bf8" exitCode=0 Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.223721 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" event={"ID":"1f52f639-8ab0-4e3f-bb45-781cfcc14179","Type":"ContainerDied","Data":"4d8d8f68bf4345e470c3c64a663b40aeacf6bee61420d37046c0403490b42bf8"} Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.237869 4679 generic.go:334] "Generic (PLEG): container finished" podID="14f79770-b6b2-4498-9694-73f3225fcc75" containerID="773f14754ffb456e85e3ae679f32359c307a5f5e8faf2d4b0ef69304c43163e1" exitCode=0 Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 
12:09:00.239084 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" event={"ID":"14f79770-b6b2-4498-9694-73f3225fcc75","Type":"ContainerDied","Data":"773f14754ffb456e85e3ae679f32359c307a5f5e8faf2d4b0ef69304c43163e1"} Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.334317 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.421221 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.438302 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f52f639-8ab0-4e3f-bb45-781cfcc14179-serving-cert\") pod \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.438407 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-client-ca\") pod \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.438477 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqrt\" (UniqueName: \"kubernetes.io/projected/1f52f639-8ab0-4e3f-bb45-781cfcc14179-kube-api-access-xkqrt\") pod \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.438588 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-config\") pod \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\" (UID: \"1f52f639-8ab0-4e3f-bb45-781cfcc14179\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.439642 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f52f639-8ab0-4e3f-bb45-781cfcc14179" (UID: "1f52f639-8ab0-4e3f-bb45-781cfcc14179"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.439810 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-config" (OuterVolumeSpecName: "config") pod "1f52f639-8ab0-4e3f-bb45-781cfcc14179" (UID: "1f52f639-8ab0-4e3f-bb45-781cfcc14179"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.449560 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f52f639-8ab0-4e3f-bb45-781cfcc14179-kube-api-access-xkqrt" (OuterVolumeSpecName: "kube-api-access-xkqrt") pod "1f52f639-8ab0-4e3f-bb45-781cfcc14179" (UID: "1f52f639-8ab0-4e3f-bb45-781cfcc14179"). InnerVolumeSpecName "kube-api-access-xkqrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.452525 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f52f639-8ab0-4e3f-bb45-781cfcc14179-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f52f639-8ab0-4e3f-bb45-781cfcc14179" (UID: "1f52f639-8ab0-4e3f-bb45-781cfcc14179"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.540374 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-config\") pod \"14f79770-b6b2-4498-9694-73f3225fcc75\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.540728 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-proxy-ca-bundles\") pod \"14f79770-b6b2-4498-9694-73f3225fcc75\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.540817 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwqnj\" (UniqueName: \"kubernetes.io/projected/14f79770-b6b2-4498-9694-73f3225fcc75-kube-api-access-bwqnj\") pod \"14f79770-b6b2-4498-9694-73f3225fcc75\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.540934 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f79770-b6b2-4498-9694-73f3225fcc75-serving-cert\") pod \"14f79770-b6b2-4498-9694-73f3225fcc75\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541030 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-client-ca\") pod \"14f79770-b6b2-4498-9694-73f3225fcc75\" (UID: \"14f79770-b6b2-4498-9694-73f3225fcc75\") " Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541468 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541567 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f52f639-8ab0-4e3f-bb45-781cfcc14179-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541632 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f52f639-8ab0-4e3f-bb45-781cfcc14179-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541703 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqrt\" (UniqueName: \"kubernetes.io/projected/1f52f639-8ab0-4e3f-bb45-781cfcc14179-kube-api-access-xkqrt\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541619 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-config" (OuterVolumeSpecName: "config") pod 
"14f79770-b6b2-4498-9694-73f3225fcc75" (UID: "14f79770-b6b2-4498-9694-73f3225fcc75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.541685 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-client-ca" (OuterVolumeSpecName: "client-ca") pod "14f79770-b6b2-4498-9694-73f3225fcc75" (UID: "14f79770-b6b2-4498-9694-73f3225fcc75"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.542008 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14f79770-b6b2-4498-9694-73f3225fcc75" (UID: "14f79770-b6b2-4498-9694-73f3225fcc75"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.544644 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f79770-b6b2-4498-9694-73f3225fcc75-kube-api-access-bwqnj" (OuterVolumeSpecName: "kube-api-access-bwqnj") pod "14f79770-b6b2-4498-9694-73f3225fcc75" (UID: "14f79770-b6b2-4498-9694-73f3225fcc75"). InnerVolumeSpecName "kube-api-access-bwqnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.544726 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f79770-b6b2-4498-9694-73f3225fcc75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14f79770-b6b2-4498-9694-73f3225fcc75" (UID: "14f79770-b6b2-4498-9694-73f3225fcc75"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.641990 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.642116 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.648979 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.649012 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.649026 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwqnj\" (UniqueName: \"kubernetes.io/projected/14f79770-b6b2-4498-9694-73f3225fcc75-kube-api-access-bwqnj\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.649037 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14f79770-b6b2-4498-9694-73f3225fcc75-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:00 crc kubenswrapper[4679]: I0203 12:09:00.649047 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14f79770-b6b2-4498-9694-73f3225fcc75-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.247768 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" event={"ID":"14f79770-b6b2-4498-9694-73f3225fcc75","Type":"ContainerDied","Data":"74c546ed895966cb23c7905f9a7b42cb6590b6fae322caa57379a1ec834a6316"} Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.248230 4679 scope.go:117] "RemoveContainer" containerID="773f14754ffb456e85e3ae679f32359c307a5f5e8faf2d4b0ef69304c43163e1" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.247888 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d79b78bc-v469p" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.249792 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.250499 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v" event={"ID":"1f52f639-8ab0-4e3f-bb45-781cfcc14179","Type":"ContainerDied","Data":"d9f1c974cc674a11e698a0c9090d3065d5483b6267251b3ce1de429df56d52b8"} Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.270242 4679 scope.go:117] "RemoveContainer" containerID="4d8d8f68bf4345e470c3c64a663b40aeacf6bee61420d37046c0403490b42bf8" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.289231 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d79b78bc-v469p"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.295040 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d79b78bc-v469p"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.300508 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.300568 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.307208 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.310271 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788fcc4689-c4h7v"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.342408 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.684695 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tbtbf" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:01 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:01 crc kubenswrapper[4679]: > Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.760787 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs"] Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761190 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="extract-content" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761224 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="extract-content" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761239 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="extract-utilities" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761247 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="extract-utilities" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761257 4679 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="extract-content" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761265 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="extract-content" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761279 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="registry-server" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761293 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="registry-server" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761303 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="registry-server" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761316 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="registry-server" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761328 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f52f639-8ab0-4e3f-bb45-781cfcc14179" containerName="route-controller-manager" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761335 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f52f639-8ab0-4e3f-bb45-781cfcc14179" containerName="route-controller-manager" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761348 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f79770-b6b2-4498-9694-73f3225fcc75" containerName="controller-manager" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761371 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f79770-b6b2-4498-9694-73f3225fcc75" containerName="controller-manager" Feb 03 12:09:01 crc kubenswrapper[4679]: E0203 12:09:01.761383 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="extract-utilities" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761389 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="extract-utilities" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761508 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f52f639-8ab0-4e3f-bb45-781cfcc14179" containerName="route-controller-manager" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761522 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="34cbcbd4-c586-4962-8fe8-c5ccdd822da4" containerName="registry-server" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761530 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="8994b452-6ee1-4dac-8bcc-90fc7fd8e8d7" containerName="registry-server" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.761537 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f79770-b6b2-4498-9694-73f3225fcc75" containerName="controller-manager" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.762039 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.764708 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.764863 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.765527 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.765945 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.766268 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.766334 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.766420 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.766941 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.771899 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.772816 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.774875 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.775003 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.775283 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.775504 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.777611 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.782704 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.794160 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s"] Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866104 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-config\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866189 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-client-ca\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866457 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-proxy-ca-bundles\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866580 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-config\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866791 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlclh\" (UniqueName: \"kubernetes.io/projected/a431e2e6-7d47-45e3-91cc-5aac63ef8049-kube-api-access-hlclh\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866866 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44p6v\" (UniqueName: \"kubernetes.io/projected/a8c18c02-8137-4d89-b270-09b45a113843-kube-api-access-44p6v\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866907 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c18c02-8137-4d89-b270-09b45a113843-serving-cert\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.866969 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-client-ca\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.867036 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a431e2e6-7d47-45e3-91cc-5aac63ef8049-serving-cert\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968723 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-config\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968782 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-client-ca\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968817 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-proxy-ca-bundles\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968844 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-config\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968889 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44p6v\" (UniqueName: \"kubernetes.io/projected/a8c18c02-8137-4d89-b270-09b45a113843-kube-api-access-44p6v\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968913 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlclh\" (UniqueName: \"kubernetes.io/projected/a431e2e6-7d47-45e3-91cc-5aac63ef8049-kube-api-access-hlclh\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968935 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c18c02-8137-4d89-b270-09b45a113843-serving-cert\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968960 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-client-ca\") pod 
\"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.968980 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a431e2e6-7d47-45e3-91cc-5aac63ef8049-serving-cert\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.969880 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-client-ca\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.970084 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-proxy-ca-bundles\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.970228 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-config\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.970780 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-client-ca\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.971408 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-config\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.976080 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a431e2e6-7d47-45e3-91cc-5aac63ef8049-serving-cert\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.977939 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c18c02-8137-4d89-b270-09b45a113843-serving-cert\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.990227 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44p6v\" (UniqueName: \"kubernetes.io/projected/a8c18c02-8137-4d89-b270-09b45a113843-kube-api-access-44p6v\") pod \"route-controller-manager-57644457cf-4hfvs\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:01 crc kubenswrapper[4679]: I0203 12:09:01.992262 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlclh\" (UniqueName: \"kubernetes.io/projected/a431e2e6-7d47-45e3-91cc-5aac63ef8049-kube-api-access-hlclh\") pod \"controller-manager-64bc7f5dfc-b8h4s\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.081776 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.090827 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.225478 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f79770-b6b2-4498-9694-73f3225fcc75" path="/var/lib/kubelet/pods/14f79770-b6b2-4498-9694-73f3225fcc75/volumes" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.226577 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f52f639-8ab0-4e3f-bb45-781cfcc14179" path="/var/lib/kubelet/pods/1f52f639-8ab0-4e3f-bb45-781cfcc14179/volumes" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.314317 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s"] Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.324722 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.324783 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.371332 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:09:02 crc kubenswrapper[4679]: I0203 12:09:02.578949 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs"] Feb 03 12:09:02 crc kubenswrapper[4679]: W0203 12:09:02.586043 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8c18c02_8137_4d89_b270_09b45a113843.slice/crio-f473bea2fd007f096ff7323f19e96c07ad4a340fb25bf161eea86549ab6aeab2 WatchSource:0}: Error finding container f473bea2fd007f096ff7323f19e96c07ad4a340fb25bf161eea86549ab6aeab2: Status 404 returned error can't find the container with id f473bea2fd007f096ff7323f19e96c07ad4a340fb25bf161eea86549ab6aeab2 Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.268286 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" 
event={"ID":"a431e2e6-7d47-45e3-91cc-5aac63ef8049","Type":"ContainerStarted","Data":"749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8"} Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.268792 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.268811 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" event={"ID":"a431e2e6-7d47-45e3-91cc-5aac63ef8049","Type":"ContainerStarted","Data":"d53256c434107037aae254a686e720002f1f962e8fe7eb88caed3ffe76bd737c"} Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.271434 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" event={"ID":"a8c18c02-8137-4d89-b270-09b45a113843","Type":"ContainerStarted","Data":"8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87"} Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.271499 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" event={"ID":"a8c18c02-8137-4d89-b270-09b45a113843","Type":"ContainerStarted","Data":"f473bea2fd007f096ff7323f19e96c07ad4a340fb25bf161eea86549ab6aeab2"} Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.274731 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.315858 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" podStartSLOduration=4.315838139 podStartE2EDuration="4.315838139s" podCreationTimestamp="2026-02-03 12:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:09:03.294561663 +0000 UTC m=+215.769457761" watchObservedRunningTime="2026-02-03 12:09:03.315838139 +0000 UTC m=+215.790734227" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.317054 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" podStartSLOduration=4.317047193 podStartE2EDuration="4.317047193s" podCreationTimestamp="2026-02-03 12:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:09:03.313963106 +0000 UTC m=+215.788859194" watchObservedRunningTime="2026-02-03 12:09:03.317047193 +0000 UTC m=+215.791943281" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.325198 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.334631 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.334714 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.768145 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:09:03 crc kubenswrapper[4679]: I0203 12:09:03.769158 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:09:04 crc kubenswrapper[4679]: I0203 12:09:04.276970 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:04 crc kubenswrapper[4679]: I0203 12:09:04.285082 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:04 crc kubenswrapper[4679]: I0203 12:09:04.385998 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2sthk" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:04 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:04 crc kubenswrapper[4679]: > Feb 03 12:09:04 crc kubenswrapper[4679]: I0203 12:09:04.811407 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g6ksm" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="registry-server" probeResult="failure" output=< Feb 03 12:09:04 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s Feb 03 12:09:04 crc kubenswrapper[4679]: > Feb 03 12:09:06 crc kubenswrapper[4679]: I0203 12:09:06.736219 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:09:06 crc kubenswrapper[4679]: I0203 12:09:06.736309 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:09:06 crc kubenswrapper[4679]: I0203 12:09:06.736417 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:09:06 crc kubenswrapper[4679]: I0203 12:09:06.737224 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:09:06 crc kubenswrapper[4679]: I0203 12:09:06.737350 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd" gracePeriod=600 Feb 03 12:09:07 crc kubenswrapper[4679]: I0203 12:09:07.297291 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd" exitCode=0 Feb 03 12:09:07 
crc kubenswrapper[4679]: I0203 12:09:07.297420 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd"} Feb 03 12:09:07 crc kubenswrapper[4679]: I0203 12:09:07.297814 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"196dda581caf52f44c16f6535475949177d645bc243bf81cdb6ae8ae7bf82aee"} Feb 03 12:09:09 crc kubenswrapper[4679]: I0203 12:09:09.500317 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-86rk7"] Feb 03 12:09:10 crc kubenswrapper[4679]: I0203 12:09:10.147229 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:09:10 crc kubenswrapper[4679]: I0203 12:09:10.692556 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:09:10 crc kubenswrapper[4679]: I0203 12:09:10.745959 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:09:11 crc kubenswrapper[4679]: I0203 12:09:11.368304 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:09:11 crc kubenswrapper[4679]: I0203 12:09:11.374709 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tbtbf"] Feb 03 12:09:12 crc kubenswrapper[4679]: I0203 12:09:12.330328 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tbtbf" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="registry-server" containerID="cri-o://cae5fcb48fcdd26431ca353410a02b37732d3da74315d0dedc7493366197ef58" gracePeriod=2 Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.338952 4679 generic.go:334] "Generic (PLEG): container finished" podID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerID="cae5fcb48fcdd26431ca353410a02b37732d3da74315d0dedc7493366197ef58" exitCode=0 Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.339071 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerDied","Data":"cae5fcb48fcdd26431ca353410a02b37732d3da74315d0dedc7493366197ef58"} Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.339575 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbtbf" event={"ID":"ef00d1e7-934d-4a44-8301-d9a778fe78d9","Type":"ContainerDied","Data":"0149f5b7791b6c0a1b8fb656f10a56ee56132521c547f277a477747a6996749d"} Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.339601 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0149f5b7791b6c0a1b8fb656f10a56ee56132521c547f277a477747a6996749d" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.346820 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.382391 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.445982 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.539798 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-catalog-content\") pod \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.539967 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-utilities\") pod \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.540043 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkmh4\" (UniqueName: \"kubernetes.io/projected/ef00d1e7-934d-4a44-8301-d9a778fe78d9-kube-api-access-dkmh4\") pod \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\" (UID: \"ef00d1e7-934d-4a44-8301-d9a778fe78d9\") " Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.541113 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-utilities" (OuterVolumeSpecName: "utilities") pod "ef00d1e7-934d-4a44-8301-d9a778fe78d9" (UID: "ef00d1e7-934d-4a44-8301-d9a778fe78d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.548949 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef00d1e7-934d-4a44-8301-d9a778fe78d9-kube-api-access-dkmh4" (OuterVolumeSpecName: "kube-api-access-dkmh4") pod "ef00d1e7-934d-4a44-8301-d9a778fe78d9" (UID: "ef00d1e7-934d-4a44-8301-d9a778fe78d9"). InnerVolumeSpecName "kube-api-access-dkmh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.599834 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef00d1e7-934d-4a44-8301-d9a778fe78d9" (UID: "ef00d1e7-934d-4a44-8301-d9a778fe78d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.641795 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkmh4\" (UniqueName: \"kubernetes.io/projected/ef00d1e7-934d-4a44-8301-d9a778fe78d9-kube-api-access-dkmh4\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.641842 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.641856 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef00d1e7-934d-4a44-8301-d9a778fe78d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.814785 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:09:13 crc kubenswrapper[4679]: I0203 12:09:13.856600 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:09:14 crc kubenswrapper[4679]: I0203 12:09:14.344602 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbtbf" Feb 03 12:09:14 crc kubenswrapper[4679]: I0203 12:09:14.364508 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tbtbf"] Feb 03 12:09:14 crc kubenswrapper[4679]: I0203 12:09:14.368629 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tbtbf"] Feb 03 12:09:15 crc kubenswrapper[4679]: I0203 12:09:15.780043 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g6ksm"] Feb 03 12:09:15 crc kubenswrapper[4679]: I0203 12:09:15.780414 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g6ksm" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="registry-server" containerID="cri-o://f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec" gracePeriod=2 Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.219880 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" path="/var/lib/kubelet/pods/ef00d1e7-934d-4a44-8301-d9a778fe78d9/volumes" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.298695 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.357417 4679 generic.go:334] "Generic (PLEG): container finished" podID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerID="f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec" exitCode=0 Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.357496 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerDied","Data":"f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec"} Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.357533 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g6ksm" event={"ID":"7a48df33-a76e-47c7-a418-d60f2b7f74de","Type":"ContainerDied","Data":"bd05af71053e390603b75958a641c97833637c0018654f16de5f56ab8e570b63"} Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.357525 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g6ksm" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.357554 4679 scope.go:117] "RemoveContainer" containerID="f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.374765 4679 scope.go:117] "RemoveContainer" containerID="149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.391835 4679 scope.go:117] "RemoveContainer" containerID="e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.408058 4679 scope.go:117] "RemoveContainer" containerID="f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec" Feb 03 12:09:16 crc kubenswrapper[4679]: E0203 12:09:16.408565 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec\": container with ID starting with f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec not found: ID does not exist" containerID="f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.408625 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec"} err="failed to get container status \"f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec\": rpc error: code = NotFound desc = could not find container \"f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec\": container with ID starting with f4c0785a7bc33a2e868def14ecae4234b297ae723a220b67e38505919030b7ec not found: ID does not exist" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.408659 4679 scope.go:117] "RemoveContainer" containerID="149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693" Feb 03 12:09:16 crc kubenswrapper[4679]: E0203 12:09:16.408949 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693\": container with ID starting with 149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693 not found: ID does not exist" 
containerID="149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.408976 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693"} err="failed to get container status \"149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693\": rpc error: code = NotFound desc = could not find container \"149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693\": container with ID starting with 149ebe38e4643cf67aebe1d9b3b30b6015efea68088aa2486993d45debcd4693 not found: ID does not exist" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.408993 4679 scope.go:117] "RemoveContainer" containerID="e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a" Feb 03 12:09:16 crc kubenswrapper[4679]: E0203 12:09:16.409322 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a\": container with ID starting with e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a not found: ID does not exist" containerID="e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.409373 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a"} err="failed to get container status \"e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a\": rpc error: code = NotFound desc = could not find container \"e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a\": container with ID starting with e1b046a853f68a54c9353419c6df941bbf4ed929ae8521a2e3e0d0b2e6b8505a not found: ID does not exist" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.479203 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-utilities\") pod \"7a48df33-a76e-47c7-a418-d60f2b7f74de\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.479263 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwfsn\" (UniqueName: \"kubernetes.io/projected/7a48df33-a76e-47c7-a418-d60f2b7f74de-kube-api-access-dwfsn\") pod \"7a48df33-a76e-47c7-a418-d60f2b7f74de\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.479330 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-catalog-content\") pod \"7a48df33-a76e-47c7-a418-d60f2b7f74de\" (UID: \"7a48df33-a76e-47c7-a418-d60f2b7f74de\") " Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.480501 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-utilities" (OuterVolumeSpecName: "utilities") pod "7a48df33-a76e-47c7-a418-d60f2b7f74de" (UID: "7a48df33-a76e-47c7-a418-d60f2b7f74de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.484551 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a48df33-a76e-47c7-a418-d60f2b7f74de-kube-api-access-dwfsn" (OuterVolumeSpecName: "kube-api-access-dwfsn") pod "7a48df33-a76e-47c7-a418-d60f2b7f74de" (UID: "7a48df33-a76e-47c7-a418-d60f2b7f74de"). InnerVolumeSpecName "kube-api-access-dwfsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.580978 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.581371 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwfsn\" (UniqueName: \"kubernetes.io/projected/7a48df33-a76e-47c7-a418-d60f2b7f74de-kube-api-access-dwfsn\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.603166 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a48df33-a76e-47c7-a418-d60f2b7f74de" (UID: "7a48df33-a76e-47c7-a418-d60f2b7f74de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.687564 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a48df33-a76e-47c7-a418-d60f2b7f74de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.688282 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g6ksm"] Feb 03 12:09:16 crc kubenswrapper[4679]: I0203 12:09:16.691681 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g6ksm"] Feb 03 12:09:18 crc kubenswrapper[4679]: I0203 12:09:18.218808 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" path="/var/lib/kubelet/pods/7a48df33-a76e-47c7-a418-d60f2b7f74de/volumes" Feb 03 12:09:19 crc kubenswrapper[4679]: I0203 12:09:19.719494 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s"] Feb 03 12:09:19 crc kubenswrapper[4679]: I0203 12:09:19.719897 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" podUID="a431e2e6-7d47-45e3-91cc-5aac63ef8049" containerName="controller-manager" containerID="cri-o://749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8" gracePeriod=30 Feb 03 12:09:19 crc kubenswrapper[4679]: I0203 12:09:19.799042 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs"] Feb 03 12:09:19 crc kubenswrapper[4679]: I0203 12:09:19.799746 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" podUID="a8c18c02-8137-4d89-b270-09b45a113843" containerName="route-controller-manager" containerID="cri-o://8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87" 
gracePeriod=30 Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.316464 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.322041 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.383833 4679 generic.go:334] "Generic (PLEG): container finished" podID="a8c18c02-8137-4d89-b270-09b45a113843" containerID="8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87" exitCode=0 Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.383909 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" event={"ID":"a8c18c02-8137-4d89-b270-09b45a113843","Type":"ContainerDied","Data":"8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87"} Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.383942 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" event={"ID":"a8c18c02-8137-4d89-b270-09b45a113843","Type":"ContainerDied","Data":"f473bea2fd007f096ff7323f19e96c07ad4a340fb25bf161eea86549ab6aeab2"} Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.383961 4679 scope.go:117] "RemoveContainer" containerID="8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.384075 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.388791 4679 generic.go:334] "Generic (PLEG): container finished" podID="a431e2e6-7d47-45e3-91cc-5aac63ef8049" containerID="749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8" exitCode=0 Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.388856 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" event={"ID":"a431e2e6-7d47-45e3-91cc-5aac63ef8049","Type":"ContainerDied","Data":"749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8"} Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.388896 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" event={"ID":"a431e2e6-7d47-45e3-91cc-5aac63ef8049","Type":"ContainerDied","Data":"d53256c434107037aae254a686e720002f1f962e8fe7eb88caed3ffe76bd737c"} Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.388971 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.426573 4679 scope.go:117] "RemoveContainer" containerID="8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.430561 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87\": container with ID starting with 8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87 not found: ID does not exist" containerID="8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.430637 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87"} err="failed to get container status \"8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87\": rpc error: code = NotFound desc = could not find container \"8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87\": container with ID starting with 8011608f7951022c6ebce942ded3a99d419211e294432af66048e83669d76c87 not found: ID does not exist" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.430682 4679 scope.go:117] "RemoveContainer" containerID="749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442562 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-proxy-ca-bundles\") pod \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442629 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c18c02-8137-4d89-b270-09b45a113843-serving-cert\") pod \"a8c18c02-8137-4d89-b270-09b45a113843\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442662 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a431e2e6-7d47-45e3-91cc-5aac63ef8049-serving-cert\") pod \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442739 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-config\") pod \"a8c18c02-8137-4d89-b270-09b45a113843\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442802 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44p6v\" (UniqueName: \"kubernetes.io/projected/a8c18c02-8137-4d89-b270-09b45a113843-kube-api-access-44p6v\") pod \"a8c18c02-8137-4d89-b270-09b45a113843\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442829 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-client-ca\") pod 
\"a431e2e6-7d47-45e3-91cc-5aac63ef8049\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442886 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlclh\" (UniqueName: \"kubernetes.io/projected/a431e2e6-7d47-45e3-91cc-5aac63ef8049-kube-api-access-hlclh\") pod \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442907 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-client-ca\") pod \"a8c18c02-8137-4d89-b270-09b45a113843\" (UID: \"a8c18c02-8137-4d89-b270-09b45a113843\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.442985 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-config\") pod \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\" (UID: \"a431e2e6-7d47-45e3-91cc-5aac63ef8049\") " Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.445371 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-config" (OuterVolumeSpecName: "config") pod "a8c18c02-8137-4d89-b270-09b45a113843" (UID: "a8c18c02-8137-4d89-b270-09b45a113843"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.445509 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-client-ca" (OuterVolumeSpecName: "client-ca") pod "a431e2e6-7d47-45e3-91cc-5aac63ef8049" (UID: "a431e2e6-7d47-45e3-91cc-5aac63ef8049"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.446610 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-config" (OuterVolumeSpecName: "config") pod "a431e2e6-7d47-45e3-91cc-5aac63ef8049" (UID: "a431e2e6-7d47-45e3-91cc-5aac63ef8049"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.446764 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a431e2e6-7d47-45e3-91cc-5aac63ef8049" (UID: "a431e2e6-7d47-45e3-91cc-5aac63ef8049"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.447151 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-client-ca" (OuterVolumeSpecName: "client-ca") pod "a8c18c02-8137-4d89-b270-09b45a113843" (UID: "a8c18c02-8137-4d89-b270-09b45a113843"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.453290 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a431e2e6-7d47-45e3-91cc-5aac63ef8049-kube-api-access-hlclh" (OuterVolumeSpecName: "kube-api-access-hlclh") pod "a431e2e6-7d47-45e3-91cc-5aac63ef8049" (UID: "a431e2e6-7d47-45e3-91cc-5aac63ef8049"). InnerVolumeSpecName "kube-api-access-hlclh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.454788 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a431e2e6-7d47-45e3-91cc-5aac63ef8049-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a431e2e6-7d47-45e3-91cc-5aac63ef8049" (UID: "a431e2e6-7d47-45e3-91cc-5aac63ef8049"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.456723 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c18c02-8137-4d89-b270-09b45a113843-kube-api-access-44p6v" (OuterVolumeSpecName: "kube-api-access-44p6v") pod "a8c18c02-8137-4d89-b270-09b45a113843" (UID: "a8c18c02-8137-4d89-b270-09b45a113843"). InnerVolumeSpecName "kube-api-access-44p6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.457910 4679 scope.go:117] "RemoveContainer" containerID="749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.463011 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8\": container with ID starting with 749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8 not found: ID does not exist" containerID="749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.463072 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8"} err="failed to get container status \"749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8\": rpc error: code = NotFound desc = could not find container \"749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8\": container with ID starting with 749eae90d271ca1cc3f75758bc0aeb0dd8d231eafb1a19259e1095f19c2e7ee8 not found: ID does not exist" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.468961 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c18c02-8137-4d89-b270-09b45a113843-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a8c18c02-8137-4d89-b270-09b45a113843" (UID: "a8c18c02-8137-4d89-b270-09b45a113843"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545209 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545240 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545252 4679 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545263 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c18c02-8137-4d89-b270-09b45a113843-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545272 4679 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a431e2e6-7d47-45e3-91cc-5aac63ef8049-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545280 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8c18c02-8137-4d89-b270-09b45a113843-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545290 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44p6v\" (UniqueName: \"kubernetes.io/projected/a8c18c02-8137-4d89-b270-09b45a113843-kube-api-access-44p6v\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545300 4679 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a431e2e6-7d47-45e3-91cc-5aac63ef8049-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.545308 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlclh\" (UniqueName: \"kubernetes.io/projected/a431e2e6-7d47-45e3-91cc-5aac63ef8049-kube-api-access-hlclh\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.716814 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs"] Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.719479 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57644457cf-4hfvs"] Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.730695 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s"] Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.735778 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64bc7f5dfc-b8h4s"] Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771312 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp"] Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771618 4679 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8c18c02-8137-4d89-b270-09b45a113843" containerName="route-controller-manager" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771639 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c18c02-8137-4d89-b270-09b45a113843" containerName="route-controller-manager" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771657 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="extract-content" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771665 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="extract-content" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771677 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a431e2e6-7d47-45e3-91cc-5aac63ef8049" containerName="controller-manager" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771685 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a431e2e6-7d47-45e3-91cc-5aac63ef8049" containerName="controller-manager" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771694 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="extract-content" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771702 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="extract-content" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771714 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="registry-server" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771723 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="registry-server" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771736 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="extract-utilities" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771742 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="extract-utilities" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771751 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="registry-server" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771757 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="registry-server" Feb 03 12:09:20 crc kubenswrapper[4679]: E0203 12:09:20.771770 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="extract-utilities" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771777 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="extract-utilities" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771880 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a431e2e6-7d47-45e3-91cc-5aac63ef8049" containerName="controller-manager" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771897 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c18c02-8137-4d89-b270-09b45a113843" containerName="route-controller-manager" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 
12:09:20.771909 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef00d1e7-934d-4a44-8301-d9a778fe78d9" containerName="registry-server" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.771916 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a48df33-a76e-47c7-a418-d60f2b7f74de" containerName="registry-server" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.772429 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.775392 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.775737 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.776111 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.776307 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.776140 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.779068 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.789454 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.792948 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp"] Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.951204 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-proxy-ca-bundles\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.951898 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-config\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.952041 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-client-ca\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.952124 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d93c882c-adfb-447c-9c60-64f307c1f6ea-serving-cert\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:20 crc kubenswrapper[4679]: I0203 12:09:20.952209 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw247\" (UniqueName: \"kubernetes.io/projected/d93c882c-adfb-447c-9c60-64f307c1f6ea-kube-api-access-bw247\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.053896 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-client-ca\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.053993 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d93c882c-adfb-447c-9c60-64f307c1f6ea-serving-cert\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.054044 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw247\" (UniqueName: \"kubernetes.io/projected/d93c882c-adfb-447c-9c60-64f307c1f6ea-kube-api-access-bw247\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.054097 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-proxy-ca-bundles\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.054167 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-config\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.055747 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-client-ca\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.056903 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-proxy-ca-bundles\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " 
pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.057136 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93c882c-adfb-447c-9c60-64f307c1f6ea-config\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.059996 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d93c882c-adfb-447c-9c60-64f307c1f6ea-serving-cert\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.085583 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw247\" (UniqueName: \"kubernetes.io/projected/d93c882c-adfb-447c-9c60-64f307c1f6ea-kube-api-access-bw247\") pod \"controller-manager-77fc9b8d9d-d9bxp\" (UID: \"d93c882c-adfb-447c-9c60-64f307c1f6ea\") " pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.093317 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.544667 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp"] Feb 03 12:09:21 crc kubenswrapper[4679]: W0203 12:09:21.557923 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd93c882c_adfb_447c_9c60_64f307c1f6ea.slice/crio-af95dcb1ac33bd9f8cd4c49539163074e8e407324b6176255f39ec571add9949 WatchSource:0}: Error finding container af95dcb1ac33bd9f8cd4c49539163074e8e407324b6176255f39ec571add9949: Status 404 returned error can't find the container with id af95dcb1ac33bd9f8cd4c49539163074e8e407324b6176255f39ec571add9949 Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.789279 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529"] Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.796257 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.803643 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529"] Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.804177 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.804502 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.804659 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.804555 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.804899 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.805121 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.966563 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-serving-cert\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.967005 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7pw\" (UniqueName: \"kubernetes.io/projected/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-kube-api-access-tp7pw\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.967151 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-client-ca\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:21 crc kubenswrapper[4679]: I0203 12:09:21.967269 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-config\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.068239 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7pw\" (UniqueName: \"kubernetes.io/projected/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-kube-api-access-tp7pw\") pod 
\"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.069317 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-client-ca\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.070481 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-client-ca\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.070557 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-config\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.071622 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-config\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.072535 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-serving-cert\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.079056 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-serving-cert\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.087087 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7pw\" (UniqueName: \"kubernetes.io/projected/ebc12fb7-f7e7-4378-88d7-26c74887e4e9-kube-api-access-tp7pw\") pod \"route-controller-manager-59f8fdf958-mz529\" (UID: \"ebc12fb7-f7e7-4378-88d7-26c74887e4e9\") " pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.115742 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.224479 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a431e2e6-7d47-45e3-91cc-5aac63ef8049" path="/var/lib/kubelet/pods/a431e2e6-7d47-45e3-91cc-5aac63ef8049/volumes" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.226209 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c18c02-8137-4d89-b270-09b45a113843" path="/var/lib/kubelet/pods/a8c18c02-8137-4d89-b270-09b45a113843/volumes" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.407324 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" event={"ID":"d93c882c-adfb-447c-9c60-64f307c1f6ea","Type":"ContainerStarted","Data":"b9902761af81ac7cdcb462f64874104968b6684a6d6c30892bf8058d57a23473"} Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.407404 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" event={"ID":"d93c882c-adfb-447c-9c60-64f307c1f6ea","Type":"ContainerStarted","Data":"af95dcb1ac33bd9f8cd4c49539163074e8e407324b6176255f39ec571add9949"} Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.407878 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.419743 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.432178 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529"] Feb 03 12:09:22 crc kubenswrapper[4679]: W0203 12:09:22.438542 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebc12fb7_f7e7_4378_88d7_26c74887e4e9.slice/crio-86858c48ad08f1c5cbb0e1610d05041dcbdc541e80241debbd022afb696b1f4f WatchSource:0}: Error finding container 86858c48ad08f1c5cbb0e1610d05041dcbdc541e80241debbd022afb696b1f4f: Status 404 returned error can't find the container with id 86858c48ad08f1c5cbb0e1610d05041dcbdc541e80241debbd022afb696b1f4f Feb 03 12:09:22 crc kubenswrapper[4679]: I0203 12:09:22.438540 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77fc9b8d9d-d9bxp" podStartSLOduration=3.438520417 podStartE2EDuration="3.438520417s" podCreationTimestamp="2026-02-03 12:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:09:22.437943131 +0000 UTC m=+234.912839219" watchObservedRunningTime="2026-02-03 12:09:22.438520417 +0000 UTC m=+234.913416505" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.086736 4679 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.087467 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466" 
gracePeriod=15 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.087614 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe" gracePeriod=15 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.087652 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c" gracePeriod=15 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.087716 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519" gracePeriod=15 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.087758 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351" gracePeriod=15 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088073 4679 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088300 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088313 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088325 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088335 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088350 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088376 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088392 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088399 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088411 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 12:09:23 crc 
kubenswrapper[4679]: I0203 12:09:23.088419 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088430 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088438 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.088446 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088453 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088566 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088583 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088595 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088605 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088614 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.088624 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.092065 4679 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.093303 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.097581 4679 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.130191 4679 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.187868 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.187909 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.187927 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.187954 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.187974 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.187991 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.188019 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.188039 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.290675 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.290775 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.290889 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.290908 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.290929 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291025 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291035 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291051 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291611 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291655 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291677 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291721 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291763 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291779 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291828 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.291944 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.418488 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.420183 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:09:23 crc 
kubenswrapper[4679]: I0203 12:09:23.421034 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe" exitCode=0 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.421073 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c" exitCode=0 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.421081 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519" exitCode=0 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.421091 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351" exitCode=2 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.421117 4679 scope.go:117] "RemoveContainer" containerID="7dfd50d1ccdadd3980218685e70060a3a9dc7a77f4c585d5b067d10e8d617682" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.423866 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" event={"ID":"ebc12fb7-f7e7-4378-88d7-26c74887e4e9","Type":"ContainerStarted","Data":"f11f2b8c7ef2a7250724f5ba6e0b67716e093e50560d240293202e2d3afceeb2"} Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.423907 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" event={"ID":"ebc12fb7-f7e7-4378-88d7-26c74887e4e9","Type":"ContainerStarted","Data":"86858c48ad08f1c5cbb0e1610d05041dcbdc541e80241debbd022afb696b1f4f"} Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.424567 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.425150 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.425757 4679 generic.go:334] "Generic (PLEG): container finished" podID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" containerID="d468bcbbf41be94a3fcc828105cd2b927838c5f3cc0b29bd26ea92620c566806" exitCode=0 Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.425869 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"46d1120c-bf71-4af7-a6c9-7155f6b4404f","Type":"ContainerDied","Data":"d468bcbbf41be94a3fcc828105cd2b927838c5f3cc0b29bd26ea92620c566806"} Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.426876 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 
12:09:23.427490 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.430664 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.430768 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.431276 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:23 crc kubenswrapper[4679]: I0203 12:09:23.432618 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:23 crc kubenswrapper[4679]: E0203 12:09:23.476235 4679 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.18:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890bb46239416d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:09:23.475617494 +0000 UTC m=+235.950513582,LastTimestamp:2026-02-03 12:09:23.475617494 +0000 UTC m=+235.950513582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.437055 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.441435 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19"} Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.441567 4679 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6837c503fc68b897dbe52e3d0ef45f61a313260ad54aec27e1847413643ea45f"} Feb 03 12:09:24 crc kubenswrapper[4679]: E0203 12:09:24.442804 4679 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.442799 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.443425 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.755965 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.756776 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.757256 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.813916 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kubelet-dir\") pod \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.814020 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-var-lock\") pod \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.814089 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "46d1120c-bf71-4af7-a6c9-7155f6b4404f" (UID: "46d1120c-bf71-4af7-a6c9-7155f6b4404f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.814300 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kube-api-access\") pod \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\" (UID: \"46d1120c-bf71-4af7-a6c9-7155f6b4404f\") " Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.814282 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-var-lock" (OuterVolumeSpecName: "var-lock") pod "46d1120c-bf71-4af7-a6c9-7155f6b4404f" (UID: "46d1120c-bf71-4af7-a6c9-7155f6b4404f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.814899 4679 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.814938 4679 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/46d1120c-bf71-4af7-a6c9-7155f6b4404f-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.822271 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "46d1120c-bf71-4af7-a6c9-7155f6b4404f" (UID: "46d1120c-bf71-4af7-a6c9-7155f6b4404f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:24 crc kubenswrapper[4679]: I0203 12:09:24.916155 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46d1120c-bf71-4af7-a6c9-7155f6b4404f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.451825 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.453347 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466" exitCode=0 Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.453528 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3681858895e941e5ddd10d00723608b0ebac39ffb77be3b549c2fe54398c852d" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.455909 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.456752 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"46d1120c-bf71-4af7-a6c9-7155f6b4404f","Type":"ContainerDied","Data":"fcc0d5e1d26f1bcb25f4aa79987f7f942aad60271396691bcb419412d537a354"} Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.456881 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc0d5e1d26f1bcb25f4aa79987f7f942aad60271396691bcb419412d537a354" Feb 03 12:09:25 crc kubenswrapper[4679]: E0203 12:09:25.456930 4679 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.474584 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.475026 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.475819 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.476732 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.477178 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.477525 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.477814 4679 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.527902 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.527986 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.528048 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.528191 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.528233 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.528619 4679 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.529660 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.630198 4679 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:25 crc kubenswrapper[4679]: I0203 12:09:25.630249 4679 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.218559 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.461586 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.462800 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.463250 4679 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.463984 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.466382 4679 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.466905 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:26 crc kubenswrapper[4679]: I0203 12:09:26.467264 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:27 crc kubenswrapper[4679]: E0203 12:09:27.100473 4679 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.18:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890bb46239416d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:09:23.475617494 +0000 UTC m=+235.950513582,LastTimestamp:2026-02-03 12:09:23.475617494 +0000 UTC m=+235.950513582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:09:28 crc kubenswrapper[4679]: I0203 12:09:28.213714 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:28 crc kubenswrapper[4679]: I0203 12:09:28.214238 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:28 crc kubenswrapper[4679]: I0203 12:09:28.214608 4679 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.257682 4679 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.259099 4679 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.129.56.18:6443: connect: connection refused" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.259477 4679 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.259984 4679 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.260413 4679 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:29 crc kubenswrapper[4679]: I0203 12:09:29.260450 4679 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.260837 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="200ms" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.461617 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="400ms" Feb 03 12:09:29 crc kubenswrapper[4679]: E0203 12:09:29.862807 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="800ms" Feb 03 12:09:30 crc kubenswrapper[4679]: E0203 12:09:30.663947 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="1.6s" Feb 03 12:09:32 crc kubenswrapper[4679]: E0203 12:09:32.265674 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="3.2s" Feb 03 12:09:34 crc kubenswrapper[4679]: I0203 12:09:34.536569 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerName="oauth-openshift" containerID="cri-o://425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504" gracePeriod=15 Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.034492 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.035819 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.036376 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.036659 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.179937 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-serving-cert\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180015 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-router-certs\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180043 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-ocp-branding-template\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180070 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-error\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180095 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-session\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180125 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-trusted-ca-bundle\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180219 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-cliconfig\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180252 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-provider-selection\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180275 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-dir\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180306 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-idp-0-file-data\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180418 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-policies\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180461 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180486 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-service-ca\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180517 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8p5w\" (UniqueName: \"kubernetes.io/projected/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-kube-api-access-k8p5w\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180548 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-login\") pod \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\" (UID: \"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9\") " Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.180834 4679 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.181509 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.181501 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.182502 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.182597 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.187814 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.188135 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.188317 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-kube-api-access-k8p5w" (OuterVolumeSpecName: "kube-api-access-k8p5w") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "kube-api-access-k8p5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.188804 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.189228 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.193278 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.193924 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.194134 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.194777 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" (UID: "4ffabf6d-5a38-41ab-bc7d-dff2741af6c9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310095 4679 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310165 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310183 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8p5w\" (UniqueName: \"kubernetes.io/projected/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-kube-api-access-k8p5w\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310196 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310214 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310227 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310240 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310255 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310270 4679 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310283 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310294 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310312 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.310327 4679 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:09:35 crc kubenswrapper[4679]: E0203 12:09:35.467236 4679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.18:6443: connect: connection refused" interval="6.4s" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.515817 4679 generic.go:334] "Generic (PLEG): container finished" podID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerID="425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504" exitCode=0 Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.515897 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" event={"ID":"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9","Type":"ContainerDied","Data":"425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504"} Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.515965 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.516010 4679 scope.go:117] "RemoveContainer" containerID="425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.515991 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" event={"ID":"4ffabf6d-5a38-41ab-bc7d-dff2741af6c9","Type":"ContainerDied","Data":"7373e4bdbaff5f6d4a7cc843d8f2dd91413b6a1e3898035f3f906255c8db4a20"} Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.516976 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.517388 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.517887 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.532608 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.533287 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.533930 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.543486 4679 scope.go:117] "RemoveContainer" containerID="425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504" Feb 03 12:09:35 crc kubenswrapper[4679]: E0203 12:09:35.544027 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504\": container 
with ID starting with 425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504 not found: ID does not exist" containerID="425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504" Feb 03 12:09:35 crc kubenswrapper[4679]: I0203 12:09:35.544087 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504"} err="failed to get container status \"425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504\": rpc error: code = NotFound desc = could not find container \"425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504\": container with ID starting with 425a4e6a8c1dfa54c03101a44e255b8c4fc60498b8d1ee766d0a741196a0d504 not found: ID does not exist" Feb 03 12:09:37 crc kubenswrapper[4679]: E0203 12:09:37.102236 4679 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.18:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890bb46239416d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 12:09:23.475617494 +0000 UTC m=+235.950513582,LastTimestamp:2026-02-03 12:09:23.475617494 +0000 UTC m=+235.950513582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.532026 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.532078 4679 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa" exitCode=1 Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.532112 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa"} Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.532741 4679 scope.go:117] "RemoveContainer" containerID="1061bb9c6f37b4cf2dec1839eaea7413abce0ab4a6e7a01ad3d70bd13061fefa" Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.532996 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.533274 4679 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.534413 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:37 crc kubenswrapper[4679]: I0203 12:09:37.534635 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.211746 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.214747 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.221631 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.222711 4679 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.223460 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.224940 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.227590 4679 status_manager.go:851] "Failed 
to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.227840 4679 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.228549 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.238316 4679 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.238351 4679 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:38 crc kubenswrapper[4679]: E0203 12:09:38.239004 4679 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.239804 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:38 crc kubenswrapper[4679]: W0203 12:09:38.269563 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3305336d18927245b04b025cbc70cf28124f981e7873712ba96cfa79e09979c5 WatchSource:0}: Error finding container 3305336d18927245b04b025cbc70cf28124f981e7873712ba96cfa79e09979c5: Status 404 returned error can't find the container with id 3305336d18927245b04b025cbc70cf28124f981e7873712ba96cfa79e09979c5 Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.542502 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.542607 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"764bffb096668d1270cb979c34cd63288c58003b8273696fbbaacef961945821"} Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.543734 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.544088 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.544507 4679 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.544775 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.546055 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ea3172d9df38eb07e4b65bdb2231da40691688f232bb3517818902a2601772c3"} Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.546096 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3305336d18927245b04b025cbc70cf28124f981e7873712ba96cfa79e09979c5"} Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 
12:09:38.546394 4679 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.546415 4679 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:38 crc kubenswrapper[4679]: E0203 12:09:38.546678 4679 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.547022 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.547326 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.547686 4679 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:38 crc kubenswrapper[4679]: I0203 12:09:38.547993 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.556218 4679 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ea3172d9df38eb07e4b65bdb2231da40691688f232bb3517818902a2601772c3" exitCode=0 Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.556289 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ea3172d9df38eb07e4b65bdb2231da40691688f232bb3517818902a2601772c3"} Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.556711 4679 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.556730 4679 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:39 crc kubenswrapper[4679]: E0203 12:09:39.557440 4679 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.557572 4679 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.558105 4679 status_manager.go:851] "Failed to get status for pod" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.558665 4679 status_manager.go:851] "Failed to get status for pod" podUID="ebc12fb7-f7e7-4378-88d7-26c74887e4e9" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59f8fdf958-mz529\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:39 crc kubenswrapper[4679]: I0203 12:09:39.559021 4679 status_manager.go:851] "Failed to get status for pod" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" pod="openshift-authentication/oauth-openshift-558db77b4-86rk7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-86rk7\": dial tcp 38.129.56.18:6443: connect: connection refused" Feb 03 12:09:40 crc kubenswrapper[4679]: I0203 12:09:40.566934 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4911fbc93840de959763f9975d6afab79b850e9b7b06b8a887c79d879312bbd4"} Feb 03 12:09:40 crc kubenswrapper[4679]: I0203 12:09:40.567492 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d3c02b2dd352c249a3f5c0cb3427b53a53d196aec18049ab5b5b76ff0023cad"} Feb 03 12:09:40 crc kubenswrapper[4679]: I0203 12:09:40.567506 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"87532baf721206caf48618ab06dbcae26332ad9bce8415c6276ca78a8960a840"} Feb 03 12:09:40 crc kubenswrapper[4679]: I0203 12:09:40.567517 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8af44ac783e3401ada28f0b21dda73d1573ebd5acae6fa3103df936011380513"} Feb 03 12:09:41 crc kubenswrapper[4679]: I0203 12:09:41.578302 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"662407653bf4f4c43672c5f852b8210fa3debff7c8f39257b5da19d688f2f868"} Feb 03 12:09:41 crc kubenswrapper[4679]: I0203 12:09:41.578802 4679 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:41 crc kubenswrapper[4679]: I0203 12:09:41.579050 4679 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:41 crc kubenswrapper[4679]: I0203 12:09:41.579084 4679 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:43 crc kubenswrapper[4679]: I0203 12:09:43.240251 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:43 crc kubenswrapper[4679]: I0203 12:09:43.240340 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:43 crc kubenswrapper[4679]: I0203 12:09:43.247611 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:44 crc kubenswrapper[4679]: I0203 12:09:44.073307 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:46 crc kubenswrapper[4679]: I0203 12:09:46.599698 4679 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:47 crc kubenswrapper[4679]: I0203 12:09:47.389549 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:47 crc kubenswrapper[4679]: I0203 12:09:47.394064 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:47 crc kubenswrapper[4679]: I0203 12:09:47.625036 4679 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:47 crc kubenswrapper[4679]: I0203 12:09:47.625078 4679 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:47 crc kubenswrapper[4679]: I0203 12:09:47.630092 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:47 crc kubenswrapper[4679]: I0203 12:09:47.630282 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 12:09:48 crc kubenswrapper[4679]: I0203 12:09:48.236924 4679 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fa596095-8f42-47e0-84b5-066c354aac6c" Feb 03 12:09:48 crc kubenswrapper[4679]: I0203 12:09:48.630617 4679 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:48 crc kubenswrapper[4679]: I0203 12:09:48.630667 4679 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55f95947-c090-45ca-8732-acab46870cb6" Feb 03 12:09:48 crc kubenswrapper[4679]: I0203 12:09:48.634684 4679 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fa596095-8f42-47e0-84b5-066c354aac6c" Feb 03 12:09:56 crc kubenswrapper[4679]: I0203 12:09:56.097243 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 12:09:56 crc kubenswrapper[4679]: I0203 12:09:56.298097 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 12:09:56 crc kubenswrapper[4679]: I0203 12:09:56.356143 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 12:09:56 crc kubenswrapper[4679]: I0203 12:09:56.554118 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 12:09:56 crc kubenswrapper[4679]: I0203 12:09:56.941230 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.224412 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.344819 4679 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.349878 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59f8fdf958-mz529" podStartSLOduration=38.349854283 podStartE2EDuration="38.349854283s" podCreationTimestamp="2026-02-03 12:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:09:46.668921951 +0000 UTC m=+259.143818039" watchObservedRunningTime="2026-02-03 12:09:57.349854283 +0000 UTC m=+269.824750361" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.350234 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-86rk7","openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.350300 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.355090 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.372519 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=11.37248865 podStartE2EDuration="11.37248865s" podCreationTimestamp="2026-02-03 12:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:09:57.369476719 +0000 UTC m=+269.844372847" watchObservedRunningTime="2026-02-03 12:09:57.37248865 +0000 UTC m=+269.847384768" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.448276 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.490074 4679 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.625989 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.680436 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.724500 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.751208 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.796017 4679 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.796429 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19" gracePeriod=5 Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.801155 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 12:09:57 crc kubenswrapper[4679]: I0203 12:09:57.902285 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.159777 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.220776 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" path="/var/lib/kubelet/pods/4ffabf6d-5a38-41ab-bc7d-dff2741af6c9/volumes" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.225325 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.445265 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.548839 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.609793 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.858138 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.873225 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.902592 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 12:09:58 crc kubenswrapper[4679]: I0203 12:09:58.934542 4679 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.058067 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.094156 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.177506 4679 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.423603 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.517814 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.660200 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.729530 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.785592 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 12:09:59 crc kubenswrapper[4679]: I0203 12:09:59.836466 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.034620 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.067785 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.130877 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.182899 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.466465 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.483872 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.498121 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.533541 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.600003 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.729682 4679 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.825763 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.829775 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.844172 4679 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.960531 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 12:10:00 crc kubenswrapper[4679]: I0203 12:10:00.997253 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.001527 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.139862 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.141753 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.249368 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.249389 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.372301 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.393883 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.397943 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.403674 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.414458 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.463850 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.540115 4679 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.743792 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.773157 
4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.865553 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 12:10:01 crc kubenswrapper[4679]: I0203 12:10:01.904616 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.000340 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.037495 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.048420 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.155246 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.232868 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.255398 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.256305 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.285391 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.299687 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.437336 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.689159 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.690286 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.696076 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.739011 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.745778 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.808544 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 
12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.879116 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 12:10:02 crc kubenswrapper[4679]: I0203 12:10:02.966600 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.213883 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.236730 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.238534 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.363426 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.374563 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.374668 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.375095 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.426116 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449314 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449422 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449480 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449511 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449566 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449631 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449670 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449733 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449838 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.449993 4679 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.450011 4679 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.450045 4679 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.450063 4679 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.459623 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.515678 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.524322 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.551451 4679 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.616277 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.653662 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.684886 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.721295 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.721346 4679 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19" exitCode=137 Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.721441 4679 scope.go:117] "RemoveContainer" containerID="75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.721554 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.724452 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.745299 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.746892 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.749314 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.750048 4679 scope.go:117] "RemoveContainer" containerID="75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19" Feb 03 12:10:03 crc kubenswrapper[4679]: E0203 12:10:03.750583 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19\": container with ID starting with 75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19 not found: ID does not exist" containerID="75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.750741 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19"} err="failed to get container status \"75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19\": rpc error: code = NotFound desc = could not find container \"75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19\": container with ID starting with 75de0538a844584e6b4b969ed4142be0b0a55649102de7b4df9842200b5cbf19 not found: ID does not exist" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.768245 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.768472 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.792353 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.887518 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.951148 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 12:10:03 crc kubenswrapper[4679]: I0203 12:10:03.956426 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.078028 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.178830 4679 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.210350 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.226050 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.369747 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.427617 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.429542 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.553200 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.696344 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.698964 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.704053 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.739633 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 12:10:04 crc kubenswrapper[4679]: I0203 12:10:04.942882 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.112638 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.243668 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.268667 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.325415 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.357477 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.403663 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.493091 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.517591 4679 reflector.go:368] Caches 
populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.785330 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.815871 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.877747 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.989195 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 12:10:05 crc kubenswrapper[4679]: I0203 12:10:05.997494 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.003860 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.011046 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.025969 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.034885 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.050055 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.091539 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.093348 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.139603 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.169228 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.223652 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.245841 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.487332 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.518379 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.528164 4679 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.623163 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.707141 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.768569 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.781120 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.819995 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75566f9bd7-64pfv"] Feb 03 12:10:06 crc kubenswrapper[4679]: E0203 12:10:06.820385 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.820406 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 12:10:06 crc kubenswrapper[4679]: E0203 12:10:06.820419 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" containerName="installer" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.820427 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" containerName="installer" Feb 03 12:10:06 crc kubenswrapper[4679]: E0203 12:10:06.820440 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerName="oauth-openshift" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.820449 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerName="oauth-openshift" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.820570 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffabf6d-5a38-41ab-bc7d-dff2741af6c9" containerName="oauth-openshift" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.820587 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.820598 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d1120c-bf71-4af7-a6c9-7155f6b4404f" containerName="installer" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.821393 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.824065 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.825023 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.825234 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.826066 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.829456 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.829949 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.830041 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.830425 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.830532 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.833328 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.833851 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.835788 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.835938 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75566f9bd7-64pfv"] Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.838575 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.843660 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.846769 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.898311 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bhl\" (UniqueName: \"kubernetes.io/projected/628b0d59-4240-4f86-b482-3b93c363be96-kube-api-access-b6bhl\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " 
pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.898836 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.898974 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899147 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-service-ca\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899249 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899344 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-audit-policies\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899484 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/628b0d59-4240-4f86-b482-3b93c363be96-audit-dir\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899598 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-login\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899727 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-error\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899834 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.899948 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-session\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.900065 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.900163 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-router-certs\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.900265 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:06 crc kubenswrapper[4679]: I0203 12:10:06.925675 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002020 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002087 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-router-certs\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: 
\"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002109 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002146 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bhl\" (UniqueName: \"kubernetes.io/projected/628b0d59-4240-4f86-b482-3b93c363be96-kube-api-access-b6bhl\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002178 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002206 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002239 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-service-ca\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002258 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002286 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-audit-policies\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002314 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/628b0d59-4240-4f86-b482-3b93c363be96-audit-dir\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") 
" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002334 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-login\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002374 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-error\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002394 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.002423 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-session\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.003592 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/628b0d59-4240-4f86-b482-3b93c363be96-audit-dir\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.003810 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-service-ca\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.004092 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.004306 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 
crc kubenswrapper[4679]: I0203 12:10:07.005304 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/628b0d59-4240-4f86-b482-3b93c363be96-audit-policies\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.011167 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.011202 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.011252 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-session\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.011271 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.011663 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-router-certs\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.012114 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.012224 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-login\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.015598 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/628b0d59-4240-4f86-b482-3b93c363be96-v4-0-config-user-template-error\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.028854 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bhl\" (UniqueName: \"kubernetes.io/projected/628b0d59-4240-4f86-b482-3b93c363be96-kube-api-access-b6bhl\") pod \"oauth-openshift-75566f9bd7-64pfv\" (UID: \"628b0d59-4240-4f86-b482-3b93c363be96\") " pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.041642 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.042342 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.128625 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.143532 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.153983 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.155630 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.199088 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.289492 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.306207 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.336066 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.353531 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.401090 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.544411 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.558324 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75566f9bd7-64pfv"] Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.625497 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 
03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.652062 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.723906 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.750927 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.770995 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" event={"ID":"628b0d59-4240-4f86-b482-3b93c363be96","Type":"ContainerStarted","Data":"28705e9d53ff7424e9128b086f928b6208aa0cbc6b0fc05ec0bee473625fcae5"} Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.896454 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.938799 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 12:10:07 crc kubenswrapper[4679]: I0203 12:10:07.988474 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.000005 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.001608 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.196837 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.262488 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.274026 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.298628 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.343473 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.357771 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.357785 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.382501 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.395554 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 12:10:08 crc 
kubenswrapper[4679]: I0203 12:10:08.469682 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.529293 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.603860 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.693560 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.727127 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.751876 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.785125 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-75566f9bd7-64pfv_628b0d59-4240-4f86-b482-3b93c363be96/oauth-openshift/0.log" Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.785185 4679 generic.go:334] "Generic (PLEG): container finished" podID="628b0d59-4240-4f86-b482-3b93c363be96" containerID="f0c8405ebb7a2c9d2af7e2058fe9fd4e0d8f97c299dcfcaa5fb0ac648e9fddc9" exitCode=255 Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.785227 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" event={"ID":"628b0d59-4240-4f86-b482-3b93c363be96","Type":"ContainerDied","Data":"f0c8405ebb7a2c9d2af7e2058fe9fd4e0d8f97c299dcfcaa5fb0ac648e9fddc9"} Feb 03 12:10:08 crc kubenswrapper[4679]: I0203 12:10:08.785998 4679 scope.go:117] "RemoveContainer" containerID="f0c8405ebb7a2c9d2af7e2058fe9fd4e0d8f97c299dcfcaa5fb0ac648e9fddc9" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.023817 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.262720 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.269501 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.322331 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.404816 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.479791 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.490670 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.535791 4679 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.549769 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.673714 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.769461 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.793510 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-75566f9bd7-64pfv_628b0d59-4240-4f86-b482-3b93c363be96/oauth-openshift/1.log" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.794336 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-75566f9bd7-64pfv_628b0d59-4240-4f86-b482-3b93c363be96/oauth-openshift/0.log" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.794511 4679 generic.go:334] "Generic (PLEG): container finished" podID="628b0d59-4240-4f86-b482-3b93c363be96" containerID="f8096dac3276af039ea41278f615d7830cba0b9ae90d55b5b5be905abfb28fdf" exitCode=255 Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.794552 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" event={"ID":"628b0d59-4240-4f86-b482-3b93c363be96","Type":"ContainerDied","Data":"f8096dac3276af039ea41278f615d7830cba0b9ae90d55b5b5be905abfb28fdf"} Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.794767 4679 scope.go:117] "RemoveContainer" containerID="f0c8405ebb7a2c9d2af7e2058fe9fd4e0d8f97c299dcfcaa5fb0ac648e9fddc9" Feb 03 12:10:09 crc kubenswrapper[4679]: I0203 12:10:09.795204 4679 scope.go:117] "RemoveContainer" containerID="f8096dac3276af039ea41278f615d7830cba0b9ae90d55b5b5be905abfb28fdf" Feb 03 12:10:09 crc kubenswrapper[4679]: E0203 12:10:09.795660 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-75566f9bd7-64pfv_openshift-authentication(628b0d59-4240-4f86-b482-3b93c363be96)\"" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" podUID="628b0d59-4240-4f86-b482-3b93c363be96" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.008403 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.161948 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.256848 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.318396 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.786665 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 
12:10:10.794742 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.803345 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-75566f9bd7-64pfv_628b0d59-4240-4f86-b482-3b93c363be96/oauth-openshift/1.log" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.804500 4679 scope.go:117] "RemoveContainer" containerID="f8096dac3276af039ea41278f615d7830cba0b9ae90d55b5b5be905abfb28fdf" Feb 03 12:10:10 crc kubenswrapper[4679]: E0203 12:10:10.804809 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-75566f9bd7-64pfv_openshift-authentication(628b0d59-4240-4f86-b482-3b93c363be96)\"" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" podUID="628b0d59-4240-4f86-b482-3b93c363be96" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.844285 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.908924 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.910463 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 12:10:10 crc kubenswrapper[4679]: I0203 12:10:10.928849 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.035302 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.048547 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.109226 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.227558 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.235464 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.312835 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.375832 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.399749 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.430998 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.593678 4679 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 12:10:11 crc kubenswrapper[4679]: I0203 12:10:11.618867 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 12:10:12 crc kubenswrapper[4679]: I0203 12:10:12.005662 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 12:10:12 crc kubenswrapper[4679]: I0203 12:10:12.875421 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 12:10:17 crc kubenswrapper[4679]: I0203 12:10:17.143924 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:17 crc kubenswrapper[4679]: I0203 12:10:17.144536 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:17 crc kubenswrapper[4679]: I0203 12:10:17.145393 4679 scope.go:117] "RemoveContainer" containerID="f8096dac3276af039ea41278f615d7830cba0b9ae90d55b5b5be905abfb28fdf" Feb 03 12:10:17 crc kubenswrapper[4679]: E0203 12:10:17.145652 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-75566f9bd7-64pfv_openshift-authentication(628b0d59-4240-4f86-b482-3b93c363be96)\"" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" podUID="628b0d59-4240-4f86-b482-3b93c363be96" Feb 03 12:10:23 crc kubenswrapper[4679]: I0203 12:10:23.195084 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 12:10:25 crc kubenswrapper[4679]: I0203 12:10:25.946926 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 12:10:27 crc kubenswrapper[4679]: I0203 12:10:27.037736 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 12:10:27 crc kubenswrapper[4679]: I0203 12:10:27.116348 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 12:10:28 crc kubenswrapper[4679]: I0203 12:10:28.012449 4679 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.212483 4679 scope.go:117] "RemoveContainer" containerID="f8096dac3276af039ea41278f615d7830cba0b9ae90d55b5b5be905abfb28fdf" Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.928460 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-75566f9bd7-64pfv_628b0d59-4240-4f86-b482-3b93c363be96/oauth-openshift/1.log" Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.928873 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" event={"ID":"628b0d59-4240-4f86-b482-3b93c363be96","Type":"ContainerStarted","Data":"282ca274f86fd5e1924de217d3b260b36f69a88ae6fc386e9dfa914f610c79f2"} Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.929503 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.932324 4679 generic.go:334] "Generic (PLEG): container finished" podID="3f09fa03-038d-4042-8f82-ca433431f66a" containerID="800b4f1ba325b667e9e7137bfdc1924b5d076bda977e334dc7610fd86424d2be" exitCode=0 Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.932418 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" event={"ID":"3f09fa03-038d-4042-8f82-ca433431f66a","Type":"ContainerDied","Data":"800b4f1ba325b667e9e7137bfdc1924b5d076bda977e334dc7610fd86424d2be"} Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.933400 4679 scope.go:117] "RemoveContainer" containerID="800b4f1ba325b667e9e7137bfdc1924b5d076bda977e334dc7610fd86424d2be" Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.937518 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" Feb 03 12:10:29 crc kubenswrapper[4679]: I0203 12:10:29.953902 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75566f9bd7-64pfv" podStartSLOduration=80.953842685 podStartE2EDuration="1m20.953842685s" podCreationTimestamp="2026-02-03 12:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:29.95323808 +0000 UTC m=+302.428134168" watchObservedRunningTime="2026-02-03 12:10:29.953842685 +0000 UTC m=+302.428738773" Feb 03 12:10:30 crc kubenswrapper[4679]: I0203 12:10:30.481611 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:10:30 crc kubenswrapper[4679]: I0203 12:10:30.482193 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:10:30 crc kubenswrapper[4679]: I0203 12:10:30.941746 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" event={"ID":"3f09fa03-038d-4042-8f82-ca433431f66a","Type":"ContainerStarted","Data":"9bda1b123e2fabcf2078d523107e59fa38cb64fc9824b495758e4d8955751440"} Feb 03 12:10:30 crc kubenswrapper[4679]: I0203 12:10:30.942186 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:10:30 crc kubenswrapper[4679]: I0203 12:10:30.947088 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:10:31 crc kubenswrapper[4679]: I0203 12:10:31.272538 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 12:10:34 crc kubenswrapper[4679]: I0203 12:10:34.790923 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 12:10:35 crc kubenswrapper[4679]: I0203 12:10:35.923319 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 12:10:37 crc kubenswrapper[4679]: I0203 12:10:37.494252 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 12:10:37 crc kubenswrapper[4679]: I0203 
12:10:37.824634 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 12:10:39 crc kubenswrapper[4679]: I0203 12:10:39.986017 4679 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 12:10:41 crc kubenswrapper[4679]: I0203 12:10:41.597787 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 12:10:42 crc kubenswrapper[4679]: I0203 12:10:42.717175 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 12:10:43 crc kubenswrapper[4679]: I0203 12:10:43.071947 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 12:10:44 crc kubenswrapper[4679]: I0203 12:10:44.843753 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 12:10:46 crc kubenswrapper[4679]: I0203 12:10:46.602714 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 12:10:47 crc kubenswrapper[4679]: I0203 12:10:47.594905 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 12:10:47 crc kubenswrapper[4679]: I0203 12:10:47.600309 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 12:10:49 crc kubenswrapper[4679]: I0203 12:10:49.319806 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 12:10:49 crc kubenswrapper[4679]: I0203 12:10:49.874766 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 12:10:50 crc kubenswrapper[4679]: I0203 12:10:50.576402 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.692244 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fr7jk"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.694636 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fr7jk" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="registry-server" containerID="cri-o://b039ad2785d6e908631672a3bbcda5faaf1d5f1b81db9af12f4872bd7a8aadde" gracePeriod=30 Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.705622 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7wbv"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.707826 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7wbv" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="registry-server" containerID="cri-o://3e1f19416f484376855a2179087d4220987c376a4581eab2d86ca9e17efff72a" gracePeriod=30 Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.716291 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqgbb"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 
12:10:55.716605 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" containerID="cri-o://9bda1b123e2fabcf2078d523107e59fa38cb64fc9824b495758e4d8955751440" gracePeriod=30 Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.744850 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b5zq"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.745291 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4b5zq" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="registry-server" containerID="cri-o://f88365545ca2f0b5242e31ffbfbfecb9528c21d5de736bfd564291c5ebbd8c56" gracePeriod=30 Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.750475 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sthk"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.750795 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sthk" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="registry-server" containerID="cri-o://e22c57c2f75ea37f6ca7353b07ab96f968cdf0c276009047518cce6c7e432602" gracePeriod=30 Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.765836 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbtmr"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.766676 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.784233 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbtmr"] Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.820388 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wvs\" (UniqueName: \"kubernetes.io/projected/6d1001e8-7956-4d94-aed4-c482940134f4-kube-api-access-d9wvs\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.820505 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d1001e8-7956-4d94-aed4-c482940134f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.820559 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6d1001e8-7956-4d94-aed4-c482940134f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.922421 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9wvs\" (UniqueName: 
\"kubernetes.io/projected/6d1001e8-7956-4d94-aed4-c482940134f4-kube-api-access-d9wvs\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.922516 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d1001e8-7956-4d94-aed4-c482940134f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.922552 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6d1001e8-7956-4d94-aed4-c482940134f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.924107 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d1001e8-7956-4d94-aed4-c482940134f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.934547 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6d1001e8-7956-4d94-aed4-c482940134f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:55 crc kubenswrapper[4679]: I0203 12:10:55.943459 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9wvs\" (UniqueName: \"kubernetes.io/projected/6d1001e8-7956-4d94-aed4-c482940134f4-kube-api-access-d9wvs\") pod \"marketplace-operator-79b997595-sbtmr\" (UID: \"6d1001e8-7956-4d94-aed4-c482940134f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.116055 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.139000 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" event={"ID":"3f09fa03-038d-4042-8f82-ca433431f66a","Type":"ContainerDied","Data":"9bda1b123e2fabcf2078d523107e59fa38cb64fc9824b495758e4d8955751440"} Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.139068 4679 scope.go:117] "RemoveContainer" containerID="800b4f1ba325b667e9e7137bfdc1924b5d076bda977e334dc7610fd86424d2be" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.139088 4679 generic.go:334] "Generic (PLEG): container finished" podID="3f09fa03-038d-4042-8f82-ca433431f66a" containerID="9bda1b123e2fabcf2078d523107e59fa38cb64fc9824b495758e4d8955751440" exitCode=0 Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.144223 4679 generic.go:334] "Generic (PLEG): container finished" podID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerID="3e1f19416f484376855a2179087d4220987c376a4581eab2d86ca9e17efff72a" exitCode=0 Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.144339 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerDied","Data":"3e1f19416f484376855a2179087d4220987c376a4581eab2d86ca9e17efff72a"} Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.158097 4679 generic.go:334] "Generic (PLEG): container finished" podID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerID="b039ad2785d6e908631672a3bbcda5faaf1d5f1b81db9af12f4872bd7a8aadde" exitCode=0 Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.158276 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerDied","Data":"b039ad2785d6e908631672a3bbcda5faaf1d5f1b81db9af12f4872bd7a8aadde"} Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.165802 4679 generic.go:334] "Generic (PLEG): container finished" podID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerID="e22c57c2f75ea37f6ca7353b07ab96f968cdf0c276009047518cce6c7e432602" exitCode=0 Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.165962 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerDied","Data":"e22c57c2f75ea37f6ca7353b07ab96f968cdf0c276009047518cce6c7e432602"} Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.169781 4679 generic.go:334] "Generic (PLEG): container finished" podID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerID="f88365545ca2f0b5242e31ffbfbfecb9528c21d5de736bfd564291c5ebbd8c56" exitCode=0 Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.169967 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerDied","Data":"f88365545ca2f0b5242e31ffbfbfecb9528c21d5de736bfd564291c5ebbd8c56"} Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.275167 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.295766 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.330169 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics\") pod \"3f09fa03-038d-4042-8f82-ca433431f66a\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.330269 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca\") pod \"3f09fa03-038d-4042-8f82-ca433431f66a\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.330426 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97xbr\" (UniqueName: \"kubernetes.io/projected/6546cf97-de00-4569-9187-b3e4d69fe5d9-kube-api-access-97xbr\") pod \"6546cf97-de00-4569-9187-b3e4d69fe5d9\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.330463 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-utilities\") pod \"6546cf97-de00-4569-9187-b3e4d69fe5d9\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.330509 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ft6c\" (UniqueName: \"kubernetes.io/projected/3f09fa03-038d-4042-8f82-ca433431f66a-kube-api-access-6ft6c\") pod \"3f09fa03-038d-4042-8f82-ca433431f66a\" (UID: \"3f09fa03-038d-4042-8f82-ca433431f66a\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.330574 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-catalog-content\") pod \"6546cf97-de00-4569-9187-b3e4d69fe5d9\" (UID: \"6546cf97-de00-4569-9187-b3e4d69fe5d9\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.331925 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3f09fa03-038d-4042-8f82-ca433431f66a" (UID: "3f09fa03-038d-4042-8f82-ca433431f66a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.332948 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-utilities" (OuterVolumeSpecName: "utilities") pod "6546cf97-de00-4569-9187-b3e4d69fe5d9" (UID: "6546cf97-de00-4569-9187-b3e4d69fe5d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.333577 4679 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.333753 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.337706 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f09fa03-038d-4042-8f82-ca433431f66a-kube-api-access-6ft6c" (OuterVolumeSpecName: "kube-api-access-6ft6c") pod "3f09fa03-038d-4042-8f82-ca433431f66a" (UID: "3f09fa03-038d-4042-8f82-ca433431f66a"). InnerVolumeSpecName "kube-api-access-6ft6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.339865 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6546cf97-de00-4569-9187-b3e4d69fe5d9-kube-api-access-97xbr" (OuterVolumeSpecName: "kube-api-access-97xbr") pod "6546cf97-de00-4569-9187-b3e4d69fe5d9" (UID: "6546cf97-de00-4569-9187-b3e4d69fe5d9"). InnerVolumeSpecName "kube-api-access-97xbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.340326 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3f09fa03-038d-4042-8f82-ca433431f66a" (UID: "3f09fa03-038d-4042-8f82-ca433431f66a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.392635 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6546cf97-de00-4569-9187-b3e4d69fe5d9" (UID: "6546cf97-de00-4569-9187-b3e4d69fe5d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.435457 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97xbr\" (UniqueName: \"kubernetes.io/projected/6546cf97-de00-4569-9187-b3e4d69fe5d9-kube-api-access-97xbr\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.435493 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ft6c\" (UniqueName: \"kubernetes.io/projected/3f09fa03-038d-4042-8f82-ca433431f66a-kube-api-access-6ft6c\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.435505 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6546cf97-de00-4569-9187-b3e4d69fe5d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.435516 4679 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f09fa03-038d-4042-8f82-ca433431f66a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: W0203 12:10:56.606143 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d1001e8_7956_4d94_aed4_c482940134f4.slice/crio-412537785d981e6c8042dcf9b9a90f66eec1daa04b333522fbc574a9f702c673 WatchSource:0}: Error finding container 412537785d981e6c8042dcf9b9a90f66eec1daa04b333522fbc574a9f702c673: Status 404 returned error can't find the container with id 412537785d981e6c8042dcf9b9a90f66eec1daa04b333522fbc574a9f702c673 Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.607044 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbtmr"] Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.705533 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.740921 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-utilities\") pod \"9cb85479-bcf1-4106-9a2b-560b2f20571a\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.741044 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzk4v\" (UniqueName: \"kubernetes.io/projected/9cb85479-bcf1-4106-9a2b-560b2f20571a-kube-api-access-qzk4v\") pod \"9cb85479-bcf1-4106-9a2b-560b2f20571a\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.741117 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-catalog-content\") pod \"9cb85479-bcf1-4106-9a2b-560b2f20571a\" (UID: \"9cb85479-bcf1-4106-9a2b-560b2f20571a\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.747283 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb85479-bcf1-4106-9a2b-560b2f20571a-kube-api-access-qzk4v" (OuterVolumeSpecName: "kube-api-access-qzk4v") pod "9cb85479-bcf1-4106-9a2b-560b2f20571a" (UID: "9cb85479-bcf1-4106-9a2b-560b2f20571a"). 
InnerVolumeSpecName "kube-api-access-qzk4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.750835 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-utilities" (OuterVolumeSpecName: "utilities") pod "9cb85479-bcf1-4106-9a2b-560b2f20571a" (UID: "9cb85479-bcf1-4106-9a2b-560b2f20571a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.804458 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.813701 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.825031 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cb85479-bcf1-4106-9a2b-560b2f20571a" (UID: "9cb85479-bcf1-4106-9a2b-560b2f20571a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.842865 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmlhl\" (UniqueName: \"kubernetes.io/projected/db9eac1b-370d-46dc-a81c-3f2e0befe712-kube-api-access-nmlhl\") pod \"db9eac1b-370d-46dc-a81c-3f2e0befe712\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.842940 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-utilities\") pod \"db9eac1b-370d-46dc-a81c-3f2e0befe712\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.842981 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-catalog-content\") pod \"db9eac1b-370d-46dc-a81c-3f2e0befe712\" (UID: \"db9eac1b-370d-46dc-a81c-3f2e0befe712\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.843005 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbs62\" (UniqueName: \"kubernetes.io/projected/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-kube-api-access-mbs62\") pod \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.843070 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-catalog-content\") pod \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.843112 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-utilities\") pod \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\" (UID: \"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb\") " Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 
12:10:56.843411 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzk4v\" (UniqueName: \"kubernetes.io/projected/9cb85479-bcf1-4106-9a2b-560b2f20571a-kube-api-access-qzk4v\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.843427 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.843437 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb85479-bcf1-4106-9a2b-560b2f20571a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.844213 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-utilities" (OuterVolumeSpecName: "utilities") pod "15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" (UID: "15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.844481 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-utilities" (OuterVolumeSpecName: "utilities") pod "db9eac1b-370d-46dc-a81c-3f2e0befe712" (UID: "db9eac1b-370d-46dc-a81c-3f2e0befe712"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.848094 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-kube-api-access-mbs62" (OuterVolumeSpecName: "kube-api-access-mbs62") pod "15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" (UID: "15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb"). InnerVolumeSpecName "kube-api-access-mbs62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.867193 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db9eac1b-370d-46dc-a81c-3f2e0befe712-kube-api-access-nmlhl" (OuterVolumeSpecName: "kube-api-access-nmlhl") pod "db9eac1b-370d-46dc-a81c-3f2e0befe712" (UID: "db9eac1b-370d-46dc-a81c-3f2e0befe712"). InnerVolumeSpecName "kube-api-access-nmlhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.871043 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db9eac1b-370d-46dc-a81c-3f2e0befe712" (UID: "db9eac1b-370d-46dc-a81c-3f2e0befe712"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.945206 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmlhl\" (UniqueName: \"kubernetes.io/projected/db9eac1b-370d-46dc-a81c-3f2e0befe712-kube-api-access-nmlhl\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.945249 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.945260 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9eac1b-370d-46dc-a81c-3f2e0befe712-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.945271 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbs62\" (UniqueName: \"kubernetes.io/projected/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-kube-api-access-mbs62\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:56 crc kubenswrapper[4679]: I0203 12:10:56.945280 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.007066 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" (UID: "15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.046332 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.179230 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sthk" event={"ID":"15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb","Type":"ContainerDied","Data":"3b921e524a356a7c313ee6c34361b5f4f371bdbe00c7da09249f89147c8518bf"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.179316 4679 scope.go:117] "RemoveContainer" containerID="e22c57c2f75ea37f6ca7353b07ab96f968cdf0c276009047518cce6c7e432602" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.179267 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sthk" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.185125 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4b5zq" event={"ID":"db9eac1b-370d-46dc-a81c-3f2e0befe712","Type":"ContainerDied","Data":"f1f3519ee3334e8190f66a56f2ac36e0705cb42a10c2695737ad9edb1bc5d0bb"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.185214 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4b5zq" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.191291 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7wbv" event={"ID":"9cb85479-bcf1-4106-9a2b-560b2f20571a","Type":"ContainerDied","Data":"6f252034c2a5a20151ce87c47811221c86369b960ee9523a7396f5b03705d86d"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.191387 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7wbv" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.194964 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr7jk" event={"ID":"6546cf97-de00-4569-9187-b3e4d69fe5d9","Type":"ContainerDied","Data":"bb78eb435dcf6f9be41ad74200a1c8325e003b0e1693d002742cab24c1fcd3e6"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.195688 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr7jk" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.197541 4679 scope.go:117] "RemoveContainer" containerID="d4021346ff586bed4050a8178450e86b1b2ba769caf23f567a77e30955c99b82" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.197707 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" event={"ID":"3f09fa03-038d-4042-8f82-ca433431f66a","Type":"ContainerDied","Data":"006a5e1a4219cfa65ca0f3e96476476f55ace39cafb396d8b02580609c7b6684"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.197781 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zqgbb" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.204395 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" event={"ID":"6d1001e8-7956-4d94-aed4-c482940134f4","Type":"ContainerStarted","Data":"3ec831a2eef38b467922d9846b2ea67281ddc0ce8bbaf7aeae0300b21d9b8739"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.204450 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" event={"ID":"6d1001e8-7956-4d94-aed4-c482940134f4","Type":"ContainerStarted","Data":"412537785d981e6c8042dcf9b9a90f66eec1daa04b333522fbc574a9f702c673"} Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.206892 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.213574 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.243553 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sbtmr" podStartSLOduration=2.243523706 podStartE2EDuration="2.243523706s" podCreationTimestamp="2026-02-03 12:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:10:57.241347887 +0000 UTC m=+329.716243975" watchObservedRunningTime="2026-02-03 12:10:57.243523706 +0000 UTC m=+329.718419794" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 
12:10:57.245708 4679 scope.go:117] "RemoveContainer" containerID="86690aa89a90d68ad4487c9bfadafe445ec3b508006aadafb5b6182e6df5a49f" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.277128 4679 scope.go:117] "RemoveContainer" containerID="f88365545ca2f0b5242e31ffbfbfecb9528c21d5de736bfd564291c5ebbd8c56" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.279226 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqgbb"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.283117 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zqgbb"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.285908 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fr7jk"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.289253 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fr7jk"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.298312 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b5zq"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.299653 4679 scope.go:117] "RemoveContainer" containerID="d180de66ba1059ba2a5e0e103f21cbae0b621d55ea20b791f20d2ad5ba4089e1" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.302868 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4b5zq"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.323677 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7wbv"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.327306 4679 scope.go:117] "RemoveContainer" containerID="4875059d49a088457a815d9747481ec6eaaf6484e96250f0e7701504ee8b8362" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.329306 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7wbv"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.349165 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sthk"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.355657 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sthk"] Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.359279 4679 scope.go:117] "RemoveContainer" containerID="3e1f19416f484376855a2179087d4220987c376a4581eab2d86ca9e17efff72a" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.375900 4679 scope.go:117] "RemoveContainer" containerID="43caa696ad81d79ad70a31cf49b2a4144c3e4226c92f7e0e42db3dda8275f563" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.393941 4679 scope.go:117] "RemoveContainer" containerID="c4185b96ed73b8fc2e07401f3c14ce88a262812ddf4808fd3ef2a8379393c34f" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.409110 4679 scope.go:117] "RemoveContainer" containerID="b039ad2785d6e908631672a3bbcda5faaf1d5f1b81db9af12f4872bd7a8aadde" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.424090 4679 scope.go:117] "RemoveContainer" containerID="79ca52eb6b3bd32bd227dedef22431e76b7a31194d87278919d9e42ef00ab884" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.441695 4679 scope.go:117] "RemoveContainer" containerID="edd4569fdf5ab00276b9f1450150fbf17cf689d3b4c4e787f785fef8ea804690" Feb 03 12:10:57 crc kubenswrapper[4679]: I0203 12:10:57.458537 4679 
scope.go:117] "RemoveContainer" containerID="9bda1b123e2fabcf2078d523107e59fa38cb64fc9824b495758e4d8955751440" Feb 03 12:10:58 crc kubenswrapper[4679]: I0203 12:10:58.232833 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" path="/var/lib/kubelet/pods/15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb/volumes" Feb 03 12:10:58 crc kubenswrapper[4679]: I0203 12:10:58.234659 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" path="/var/lib/kubelet/pods/3f09fa03-038d-4042-8f82-ca433431f66a/volumes" Feb 03 12:10:58 crc kubenswrapper[4679]: I0203 12:10:58.235163 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" path="/var/lib/kubelet/pods/6546cf97-de00-4569-9187-b3e4d69fe5d9/volumes" Feb 03 12:10:58 crc kubenswrapper[4679]: I0203 12:10:58.236474 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" path="/var/lib/kubelet/pods/9cb85479-bcf1-4106-9a2b-560b2f20571a/volumes" Feb 03 12:10:58 crc kubenswrapper[4679]: I0203 12:10:58.237206 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" path="/var/lib/kubelet/pods/db9eac1b-370d-46dc-a81c-3f2e0befe712/volumes" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.536108 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k87kr"] Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537240 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537257 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537268 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537275 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537290 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537297 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537307 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537314 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537323 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537329 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="extract-content" Feb 03 
12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537342 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537348 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537381 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537388 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537399 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537406 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537415 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537422 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="extract-utilities" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537434 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537441 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537452 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537458 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537470 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537477 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537488 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537496 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: E0203 12:11:22.537507 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537514 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" 
containerName="extract-content" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537625 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537637 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb85479-bcf1-4106-9a2b-560b2f20571a" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537650 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e4f0f5-84bf-4940-8659-ddf4f8a8a8eb" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537664 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="6546cf97-de00-4569-9187-b3e4d69fe5d9" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537672 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9eac1b-370d-46dc-a81c-3f2e0befe712" containerName="registry-server" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.537887 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f09fa03-038d-4042-8f82-ca433431f66a" containerName="marketplace-operator" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.538661 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.549145 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.559162 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k87kr"] Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.655058 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-catalog-content\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.655155 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jfdl\" (UniqueName: \"kubernetes.io/projected/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-kube-api-access-8jfdl\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.655263 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-utilities\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.729547 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rs7cm"] Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.730661 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.733860 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.748257 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rs7cm"] Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.757442 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-utilities\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.757612 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-catalog-content\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.757649 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jfdl\" (UniqueName: \"kubernetes.io/projected/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-kube-api-access-8jfdl\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.758010 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-utilities\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.758283 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-catalog-content\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.798779 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jfdl\" (UniqueName: \"kubernetes.io/projected/39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b-kube-api-access-8jfdl\") pod \"redhat-marketplace-k87kr\" (UID: \"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b\") " pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.859564 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-catalog-content\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.859640 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbrt\" (UniqueName: \"kubernetes.io/projected/330ffce1-de6e-4402-8bb5-52976082c21e-kube-api-access-klbrt\") pod \"redhat-operators-rs7cm\" (UID: 
\"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.859705 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-utilities\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.867237 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.960917 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-catalog-content\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.960989 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbrt\" (UniqueName: \"kubernetes.io/projected/330ffce1-de6e-4402-8bb5-52976082c21e-kube-api-access-klbrt\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.961071 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-utilities\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.961616 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-catalog-content\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.961717 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-utilities\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:22 crc kubenswrapper[4679]: I0203 12:11:22.984809 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbrt\" (UniqueName: \"kubernetes.io/projected/330ffce1-de6e-4402-8bb5-52976082c21e-kube-api-access-klbrt\") pod \"redhat-operators-rs7cm\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:23 crc kubenswrapper[4679]: I0203 12:11:23.048445 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:23 crc kubenswrapper[4679]: I0203 12:11:23.388844 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rs7cm"] Feb 03 12:11:23 crc kubenswrapper[4679]: I0203 12:11:23.499474 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k87kr"] Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.328216 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wwj2t"] Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.330252 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.334703 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.345041 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwj2t"] Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.379901 4679 generic.go:334] "Generic (PLEG): container finished" podID="330ffce1-de6e-4402-8bb5-52976082c21e" containerID="1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662" exitCode=0 Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.379985 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerDied","Data":"1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662"} Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.380038 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerStarted","Data":"aef398a500d0fab6f8747e89ab50bad7d9d940c655079a6471098d92619ba828"} Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.383442 4679 generic.go:334] "Generic (PLEG): container finished" podID="39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b" containerID="8072e80d2d86f7945d3ef899b218af442420e7014eba110026373f1700c7fb20" exitCode=0 Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.383527 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k87kr" event={"ID":"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b","Type":"ContainerDied","Data":"8072e80d2d86f7945d3ef899b218af442420e7014eba110026373f1700c7fb20"} Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.383568 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k87kr" event={"ID":"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b","Type":"ContainerStarted","Data":"16cc2988e810d9543965779b4ac236b69ea91c1f36b9c8e95a452e9435747260"} Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.480101 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkhn\" (UniqueName: \"kubernetes.io/projected/aacd0fa8-7197-42cd-8023-62d7085d86a5-kube-api-access-4gkhn\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.480168 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/aacd0fa8-7197-42cd-8023-62d7085d86a5-utilities\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.480191 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aacd0fa8-7197-42cd-8023-62d7085d86a5-catalog-content\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.581551 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkhn\" (UniqueName: \"kubernetes.io/projected/aacd0fa8-7197-42cd-8023-62d7085d86a5-kube-api-access-4gkhn\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.581617 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aacd0fa8-7197-42cd-8023-62d7085d86a5-utilities\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.581638 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aacd0fa8-7197-42cd-8023-62d7085d86a5-catalog-content\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.582219 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aacd0fa8-7197-42cd-8023-62d7085d86a5-catalog-content\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.582310 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aacd0fa8-7197-42cd-8023-62d7085d86a5-utilities\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.602515 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkhn\" (UniqueName: \"kubernetes.io/projected/aacd0fa8-7197-42cd-8023-62d7085d86a5-kube-api-access-4gkhn\") pod \"certified-operators-wwj2t\" (UID: \"aacd0fa8-7197-42cd-8023-62d7085d86a5\") " pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.654182 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:24 crc kubenswrapper[4679]: I0203 12:11:24.867107 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwj2t"] Feb 03 12:11:24 crc kubenswrapper[4679]: W0203 12:11:24.875019 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaacd0fa8_7197_42cd_8023_62d7085d86a5.slice/crio-3d8242661173dc11978b0a4ae932904afd1f62cd80e427f244021764165e7836 WatchSource:0}: Error finding container 3d8242661173dc11978b0a4ae932904afd1f62cd80e427f244021764165e7836: Status 404 returned error can't find the container with id 3d8242661173dc11978b0a4ae932904afd1f62cd80e427f244021764165e7836 Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.341941 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vdzrp"] Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.346494 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vdzrp"] Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.346713 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.350908 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.394450 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerStarted","Data":"cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819"} Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.398839 4679 generic.go:334] "Generic (PLEG): container finished" podID="aacd0fa8-7197-42cd-8023-62d7085d86a5" containerID="bbb4618f944521cdb4e961f0fc3511bf2d98cd84e855331f18908d725d396f59" exitCode=0 Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.398878 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwj2t" event={"ID":"aacd0fa8-7197-42cd-8023-62d7085d86a5","Type":"ContainerDied","Data":"bbb4618f944521cdb4e961f0fc3511bf2d98cd84e855331f18908d725d396f59"} Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.398900 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwj2t" event={"ID":"aacd0fa8-7197-42cd-8023-62d7085d86a5","Type":"ContainerStarted","Data":"3d8242661173dc11978b0a4ae932904afd1f62cd80e427f244021764165e7836"} Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.497208 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f718cd3c-d9e9-45d7-abf0-989f2392abf8-catalog-content\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.497391 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f718cd3c-d9e9-45d7-abf0-989f2392abf8-utilities\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " 
pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.497460 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfs4w\" (UniqueName: \"kubernetes.io/projected/f718cd3c-d9e9-45d7-abf0-989f2392abf8-kube-api-access-qfs4w\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.598313 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f718cd3c-d9e9-45d7-abf0-989f2392abf8-catalog-content\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.598444 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f718cd3c-d9e9-45d7-abf0-989f2392abf8-utilities\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.598487 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfs4w\" (UniqueName: \"kubernetes.io/projected/f718cd3c-d9e9-45d7-abf0-989f2392abf8-kube-api-access-qfs4w\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.599033 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f718cd3c-d9e9-45d7-abf0-989f2392abf8-catalog-content\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.599333 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f718cd3c-d9e9-45d7-abf0-989f2392abf8-utilities\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.620429 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfs4w\" (UniqueName: \"kubernetes.io/projected/f718cd3c-d9e9-45d7-abf0-989f2392abf8-kube-api-access-qfs4w\") pod \"community-operators-vdzrp\" (UID: \"f718cd3c-d9e9-45d7-abf0-989f2392abf8\") " pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:25 crc kubenswrapper[4679]: I0203 12:11:25.667407 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.077717 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vdzrp"] Feb 03 12:11:26 crc kubenswrapper[4679]: W0203 12:11:26.092237 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf718cd3c_d9e9_45d7_abf0_989f2392abf8.slice/crio-c3a29428e68b5762ce715fe27aeefaabc5eb8b69731a2120a98aefda0367d5fa WatchSource:0}: Error finding container c3a29428e68b5762ce715fe27aeefaabc5eb8b69731a2120a98aefda0367d5fa: Status 404 returned error can't find the container with id c3a29428e68b5762ce715fe27aeefaabc5eb8b69731a2120a98aefda0367d5fa Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.416961 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwj2t" event={"ID":"aacd0fa8-7197-42cd-8023-62d7085d86a5","Type":"ContainerStarted","Data":"69bebb0746ef9578f448d8dffa1deaaf8454a0f85c511fe15d2ca91fe38aea1d"} Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.419005 4679 generic.go:334] "Generic (PLEG): container finished" podID="39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b" containerID="d9649f74b3798571446c34d0b536b90ef3c38ab1f6556a752eceee57beff1616" exitCode=0 Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.419167 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k87kr" event={"ID":"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b","Type":"ContainerDied","Data":"d9649f74b3798571446c34d0b536b90ef3c38ab1f6556a752eceee57beff1616"} Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.423390 4679 generic.go:334] "Generic (PLEG): container finished" podID="330ffce1-de6e-4402-8bb5-52976082c21e" containerID="cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819" exitCode=0 Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.424068 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerDied","Data":"cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819"} Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.425519 4679 generic.go:334] "Generic (PLEG): container finished" podID="f718cd3c-d9e9-45d7-abf0-989f2392abf8" containerID="2df832aec88ea1ef99518a296fca9ae7e438c20389b5075296865f695a5ef741" exitCode=0 Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.425541 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdzrp" event={"ID":"f718cd3c-d9e9-45d7-abf0-989f2392abf8","Type":"ContainerDied","Data":"2df832aec88ea1ef99518a296fca9ae7e438c20389b5075296865f695a5ef741"} Feb 03 12:11:26 crc kubenswrapper[4679]: I0203 12:11:26.425555 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdzrp" event={"ID":"f718cd3c-d9e9-45d7-abf0-989f2392abf8","Type":"ContainerStarted","Data":"c3a29428e68b5762ce715fe27aeefaabc5eb8b69731a2120a98aefda0367d5fa"} Feb 03 12:11:27 crc kubenswrapper[4679]: I0203 12:11:27.437025 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdzrp" event={"ID":"f718cd3c-d9e9-45d7-abf0-989f2392abf8","Type":"ContainerStarted","Data":"118df014915a76a312183fbe0b9022f24c8d1eabeb59809d99c23afb5cb3f33d"} Feb 03 12:11:27 crc kubenswrapper[4679]: I0203 12:11:27.450011 
4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerStarted","Data":"a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad"} Feb 03 12:11:27 crc kubenswrapper[4679]: I0203 12:11:27.453301 4679 generic.go:334] "Generic (PLEG): container finished" podID="aacd0fa8-7197-42cd-8023-62d7085d86a5" containerID="69bebb0746ef9578f448d8dffa1deaaf8454a0f85c511fe15d2ca91fe38aea1d" exitCode=0 Feb 03 12:11:27 crc kubenswrapper[4679]: I0203 12:11:27.453393 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwj2t" event={"ID":"aacd0fa8-7197-42cd-8023-62d7085d86a5","Type":"ContainerDied","Data":"69bebb0746ef9578f448d8dffa1deaaf8454a0f85c511fe15d2ca91fe38aea1d"} Feb 03 12:11:27 crc kubenswrapper[4679]: I0203 12:11:27.504269 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rs7cm" podStartSLOduration=2.735260843 podStartE2EDuration="5.504243894s" podCreationTimestamp="2026-02-03 12:11:22 +0000 UTC" firstStartedPulling="2026-02-03 12:11:24.38296544 +0000 UTC m=+356.857861538" lastFinishedPulling="2026-02-03 12:11:27.151948511 +0000 UTC m=+359.626844589" observedRunningTime="2026-02-03 12:11:27.50190848 +0000 UTC m=+359.976804568" watchObservedRunningTime="2026-02-03 12:11:27.504243894 +0000 UTC m=+359.979139992" Feb 03 12:11:28 crc kubenswrapper[4679]: I0203 12:11:28.464862 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwj2t" event={"ID":"aacd0fa8-7197-42cd-8023-62d7085d86a5","Type":"ContainerStarted","Data":"6e5007b4283fa69c3044e6edfc2bbd7f93fcdc1f20209cfe0314f61ee1490899"} Feb 03 12:11:28 crc kubenswrapper[4679]: I0203 12:11:28.476306 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k87kr" event={"ID":"39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b","Type":"ContainerStarted","Data":"c6e739a33c1b8939f465966a2d7da337c869d9300c49565ef00343ba95fc10a8"} Feb 03 12:11:28 crc kubenswrapper[4679]: I0203 12:11:28.484939 4679 generic.go:334] "Generic (PLEG): container finished" podID="f718cd3c-d9e9-45d7-abf0-989f2392abf8" containerID="118df014915a76a312183fbe0b9022f24c8d1eabeb59809d99c23afb5cb3f33d" exitCode=0 Feb 03 12:11:28 crc kubenswrapper[4679]: I0203 12:11:28.485612 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdzrp" event={"ID":"f718cd3c-d9e9-45d7-abf0-989f2392abf8","Type":"ContainerDied","Data":"118df014915a76a312183fbe0b9022f24c8d1eabeb59809d99c23afb5cb3f33d"} Feb 03 12:11:28 crc kubenswrapper[4679]: I0203 12:11:28.486121 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wwj2t" podStartSLOduration=1.899785669 podStartE2EDuration="4.486109518s" podCreationTimestamp="2026-02-03 12:11:24 +0000 UTC" firstStartedPulling="2026-02-03 12:11:25.402222007 +0000 UTC m=+357.877118095" lastFinishedPulling="2026-02-03 12:11:27.988545856 +0000 UTC m=+360.463441944" observedRunningTime="2026-02-03 12:11:28.48103981 +0000 UTC m=+360.955935898" watchObservedRunningTime="2026-02-03 12:11:28.486109518 +0000 UTC m=+360.961005606" Feb 03 12:11:28 crc kubenswrapper[4679]: I0203 12:11:28.505849 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k87kr" podStartSLOduration=3.489780001 
podStartE2EDuration="6.505823962s" podCreationTimestamp="2026-02-03 12:11:22 +0000 UTC" firstStartedPulling="2026-02-03 12:11:24.385043306 +0000 UTC m=+356.859939394" lastFinishedPulling="2026-02-03 12:11:27.401087267 +0000 UTC m=+359.875983355" observedRunningTime="2026-02-03 12:11:28.500812416 +0000 UTC m=+360.975708514" watchObservedRunningTime="2026-02-03 12:11:28.505823962 +0000 UTC m=+360.980720050" Feb 03 12:11:30 crc kubenswrapper[4679]: I0203 12:11:30.507223 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vdzrp" event={"ID":"f718cd3c-d9e9-45d7-abf0-989f2392abf8","Type":"ContainerStarted","Data":"89684fa1387599af07808522d8769ab19c571aa92660a9519754faa58fe31d3f"} Feb 03 12:11:30 crc kubenswrapper[4679]: I0203 12:11:30.535935 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vdzrp" podStartSLOduration=2.5144781419999997 podStartE2EDuration="5.535909798s" podCreationTimestamp="2026-02-03 12:11:25 +0000 UTC" firstStartedPulling="2026-02-03 12:11:26.426298206 +0000 UTC m=+358.901194294" lastFinishedPulling="2026-02-03 12:11:29.447729862 +0000 UTC m=+361.922625950" observedRunningTime="2026-02-03 12:11:30.532400073 +0000 UTC m=+363.007296181" watchObservedRunningTime="2026-02-03 12:11:30.535909798 +0000 UTC m=+363.010805886" Feb 03 12:11:32 crc kubenswrapper[4679]: I0203 12:11:32.867986 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:32 crc kubenswrapper[4679]: I0203 12:11:32.868410 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:32 crc kubenswrapper[4679]: I0203 12:11:32.915338 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.049150 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.050432 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.088202 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.561918 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k87kr" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.567314 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.695854 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j976t"] Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.696798 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.801373 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j976t"] Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827022 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kp9\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-kube-api-access-q2kp9\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827068 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-registry-tls\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827107 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-bound-sa-token\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827148 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827182 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dcd0b61e-d5b8-4042-a7df-4bb48644656f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827208 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dcd0b61e-d5b8-4042-a7df-4bb48644656f-registry-certificates\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827257 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd0b61e-d5b8-4042-a7df-4bb48644656f-trusted-ca\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.827281 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/dcd0b61e-d5b8-4042-a7df-4bb48644656f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.852429 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928301 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dcd0b61e-d5b8-4042-a7df-4bb48644656f-registry-certificates\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928369 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd0b61e-d5b8-4042-a7df-4bb48644656f-trusted-ca\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928405 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dcd0b61e-d5b8-4042-a7df-4bb48644656f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928471 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2kp9\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-kube-api-access-q2kp9\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928495 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-registry-tls\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928541 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-bound-sa-token\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.928581 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dcd0b61e-d5b8-4042-a7df-4bb48644656f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.930743 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dcd0b61e-d5b8-4042-a7df-4bb48644656f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.932341 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcd0b61e-d5b8-4042-a7df-4bb48644656f-trusted-ca\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.937123 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dcd0b61e-d5b8-4042-a7df-4bb48644656f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.943089 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-registry-tls\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.954943 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dcd0b61e-d5b8-4042-a7df-4bb48644656f-registry-certificates\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.955854 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2kp9\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-kube-api-access-q2kp9\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:33 crc kubenswrapper[4679]: I0203 12:11:33.956869 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dcd0b61e-d5b8-4042-a7df-4bb48644656f-bound-sa-token\") pod \"image-registry-66df7c8f76-j976t\" (UID: \"dcd0b61e-d5b8-4042-a7df-4bb48644656f\") " pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:34 crc kubenswrapper[4679]: I0203 12:11:34.023516 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:34 crc kubenswrapper[4679]: I0203 12:11:34.221523 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j976t"] Feb 03 12:11:34 crc kubenswrapper[4679]: W0203 12:11:34.230863 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd0b61e_d5b8_4042_a7df_4bb48644656f.slice/crio-1d5914bdaaaa035e3af193f8d7f47f213f8eca1f0c45ef7e2f90946a77d9db9d WatchSource:0}: Error finding container 1d5914bdaaaa035e3af193f8d7f47f213f8eca1f0c45ef7e2f90946a77d9db9d: Status 404 returned error can't find the container with id 1d5914bdaaaa035e3af193f8d7f47f213f8eca1f0c45ef7e2f90946a77d9db9d Feb 03 12:11:34 crc kubenswrapper[4679]: I0203 12:11:34.532706 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" event={"ID":"dcd0b61e-d5b8-4042-a7df-4bb48644656f","Type":"ContainerStarted","Data":"1d5914bdaaaa035e3af193f8d7f47f213f8eca1f0c45ef7e2f90946a77d9db9d"} Feb 03 12:11:34 crc kubenswrapper[4679]: I0203 12:11:34.654729 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:34 crc kubenswrapper[4679]: I0203 12:11:34.654811 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:34 crc kubenswrapper[4679]: I0203 12:11:34.698547 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:35 crc kubenswrapper[4679]: I0203 12:11:35.541480 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" event={"ID":"dcd0b61e-d5b8-4042-a7df-4bb48644656f","Type":"ContainerStarted","Data":"34cf01794625d7671dc15bb5f94bdca803c8c340286adc687b96b12b26779d13"} Feb 03 12:11:35 crc kubenswrapper[4679]: I0203 12:11:35.596106 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wwj2t" Feb 03 12:11:35 crc kubenswrapper[4679]: I0203 12:11:35.668114 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:35 crc kubenswrapper[4679]: I0203 12:11:35.668173 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:35 crc kubenswrapper[4679]: I0203 12:11:35.714141 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:36 crc kubenswrapper[4679]: I0203 12:11:36.568087 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" podStartSLOduration=3.568064172 podStartE2EDuration="3.568064172s" podCreationTimestamp="2026-02-03 12:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:11:36.564987448 +0000 UTC m=+369.039883536" watchObservedRunningTime="2026-02-03 12:11:36.568064172 +0000 UTC m=+369.042960260" Feb 03 12:11:36 crc kubenswrapper[4679]: I0203 12:11:36.596268 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-vdzrp" Feb 03 12:11:36 crc kubenswrapper[4679]: I0203 12:11:36.735599 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:11:36 crc kubenswrapper[4679]: I0203 12:11:36.735708 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:11:44 crc kubenswrapper[4679]: I0203 12:11:44.024380 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:54 crc kubenswrapper[4679]: I0203 12:11:54.028307 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-j976t" Feb 03 12:11:54 crc kubenswrapper[4679]: I0203 12:11:54.096307 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-klcrz"] Feb 03 12:12:06 crc kubenswrapper[4679]: I0203 12:12:06.735814 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:12:06 crc kubenswrapper[4679]: I0203 12:12:06.736514 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.144940 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" podUID="053c55aa-a27c-4b37-9a5c-99925bd42082" containerName="registry" containerID="cri-o://60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150" gracePeriod=30 Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.488780 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.585836 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586030 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-tls\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586064 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lt2\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-kube-api-access-22lt2\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586113 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/053c55aa-a27c-4b37-9a5c-99925bd42082-installation-pull-secrets\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586154 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-certificates\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586195 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-trusted-ca\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586229 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-bound-sa-token\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.586261 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/053c55aa-a27c-4b37-9a5c-99925bd42082-ca-trust-extracted\") pod \"053c55aa-a27c-4b37-9a5c-99925bd42082\" (UID: \"053c55aa-a27c-4b37-9a5c-99925bd42082\") " Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.587384 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.587547 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.594428 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.595531 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-kube-api-access-22lt2" (OuterVolumeSpecName: "kube-api-access-22lt2") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "kube-api-access-22lt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.595895 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.599672 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.601091 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053c55aa-a27c-4b37-9a5c-99925bd42082-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.603178 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053c55aa-a27c-4b37-9a5c-99925bd42082-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "053c55aa-a27c-4b37-9a5c-99925bd42082" (UID: "053c55aa-a27c-4b37-9a5c-99925bd42082"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689267 4679 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689332 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lt2\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-kube-api-access-22lt2\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689351 4679 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/053c55aa-a27c-4b37-9a5c-99925bd42082-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689389 4679 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689402 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/053c55aa-a27c-4b37-9a5c-99925bd42082-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689412 4679 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/053c55aa-a27c-4b37-9a5c-99925bd42082-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:19 crc kubenswrapper[4679]: I0203 12:12:19.689424 4679 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/053c55aa-a27c-4b37-9a5c-99925bd42082-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.007232 4679 generic.go:334] "Generic (PLEG): container finished" podID="053c55aa-a27c-4b37-9a5c-99925bd42082" containerID="60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150" exitCode=0 Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.007304 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" event={"ID":"053c55aa-a27c-4b37-9a5c-99925bd42082","Type":"ContainerDied","Data":"60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150"} Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.007393 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" event={"ID":"053c55aa-a27c-4b37-9a5c-99925bd42082","Type":"ContainerDied","Data":"98bee677d9a807a1a760e8682be83bd61dfee1ef98df00e6cab1f2523256762a"} Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.007336 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-klcrz" Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.007414 4679 scope.go:117] "RemoveContainer" containerID="60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150" Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.028071 4679 scope.go:117] "RemoveContainer" containerID="60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150" Feb 03 12:12:20 crc kubenswrapper[4679]: E0203 12:12:20.029644 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150\": container with ID starting with 60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150 not found: ID does not exist" containerID="60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150" Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.029705 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150"} err="failed to get container status \"60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150\": rpc error: code = NotFound desc = could not find container \"60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150\": container with ID starting with 60f0e63728245de840341b3480ca1d99046a734812663dac06e8f11acef77150 not found: ID does not exist" Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.044432 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-klcrz"] Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.048338 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-klcrz"] Feb 03 12:12:20 crc kubenswrapper[4679]: I0203 12:12:20.219048 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053c55aa-a27c-4b37-9a5c-99925bd42082" path="/var/lib/kubelet/pods/053c55aa-a27c-4b37-9a5c-99925bd42082/volumes" Feb 03 12:12:28 crc kubenswrapper[4679]: I0203 12:12:28.446450 4679 scope.go:117] "RemoveContainer" containerID="3e9a3901afad9febbbe0906f55c01e2d7f1dc1c1df91cf7059a696517ee8b0fe" Feb 03 12:12:28 crc kubenswrapper[4679]: I0203 12:12:28.468342 4679 scope.go:117] "RemoveContainer" containerID="8e63de8bd86f08b59b28ddda16fa17c31414491a9e34dd59c59597718223fc4d" Feb 03 12:12:28 crc kubenswrapper[4679]: I0203 12:12:28.492241 4679 scope.go:117] "RemoveContainer" containerID="dd88eb89c4e4cd031341c0720b53b479d1803c49b97e0eed110751fadc100e1c" Feb 03 12:12:28 crc kubenswrapper[4679]: I0203 12:12:28.509162 4679 scope.go:117] "RemoveContainer" containerID="b0a17152b5424f014f9c42797b95164cdb531adc69572d52cf5f5cf18ac5a351" Feb 03 12:12:28 crc kubenswrapper[4679]: I0203 12:12:28.522143 4679 scope.go:117] "RemoveContainer" containerID="665d473a1e0741025ea749214d35454f9cd5dc5efa5cc9c9e5fd2eea0108c466" Feb 03 12:12:28 crc kubenswrapper[4679]: I0203 12:12:28.538231 4679 scope.go:117] "RemoveContainer" containerID="00400bc49cbe52e9fccd6aaf8e74f513550e44aa0d1ace9370502cefb8c86519" Feb 03 12:12:36 crc kubenswrapper[4679]: I0203 12:12:36.736207 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 
12:12:36 crc kubenswrapper[4679]: I0203 12:12:36.737180 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:12:36 crc kubenswrapper[4679]: I0203 12:12:36.737262 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:12:36 crc kubenswrapper[4679]: I0203 12:12:36.738471 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"196dda581caf52f44c16f6535475949177d645bc243bf81cdb6ae8ae7bf82aee"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:12:36 crc kubenswrapper[4679]: I0203 12:12:36.738596 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://196dda581caf52f44c16f6535475949177d645bc243bf81cdb6ae8ae7bf82aee" gracePeriod=600 Feb 03 12:12:37 crc kubenswrapper[4679]: I0203 12:12:37.124422 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="196dda581caf52f44c16f6535475949177d645bc243bf81cdb6ae8ae7bf82aee" exitCode=0 Feb 03 12:12:37 crc kubenswrapper[4679]: I0203 12:12:37.124516 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"196dda581caf52f44c16f6535475949177d645bc243bf81cdb6ae8ae7bf82aee"} Feb 03 12:12:37 crc kubenswrapper[4679]: I0203 12:12:37.124600 4679 scope.go:117] "RemoveContainer" containerID="765e5f962fdb838d0e577ebc0e1a5b1b9dfe074cfe6ecd5245b8f4a72378a1bd" Feb 03 12:12:38 crc kubenswrapper[4679]: I0203 12:12:38.134300 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"5b81e2fd517b786183416342868b0696a900871d24e38432da83ad64817cbf19"} Feb 03 12:14:28 crc kubenswrapper[4679]: I0203 12:14:28.589402 4679 scope.go:117] "RemoveContainer" containerID="a424b23d2959f5b112c5fb2f9bce223b83bacb7cab5413d13c11fbc4a84c876a" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.178326 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd"] Feb 03 12:15:00 crc kubenswrapper[4679]: E0203 12:15:00.179845 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053c55aa-a27c-4b37-9a5c-99925bd42082" containerName="registry" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.182750 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="053c55aa-a27c-4b37-9a5c-99925bd42082" containerName="registry" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.183172 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="053c55aa-a27c-4b37-9a5c-99925bd42082" containerName="registry" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.184878 4679 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.189662 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.189918 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.199776 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd"] Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.359352 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpcs\" (UniqueName: \"kubernetes.io/projected/22114ac5-b236-4e7c-ba0a-5703373937b2-kube-api-access-bjpcs\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.359469 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22114ac5-b236-4e7c-ba0a-5703373937b2-config-volume\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.359497 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22114ac5-b236-4e7c-ba0a-5703373937b2-secret-volume\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.460676 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjpcs\" (UniqueName: \"kubernetes.io/projected/22114ac5-b236-4e7c-ba0a-5703373937b2-kube-api-access-bjpcs\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.460764 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22114ac5-b236-4e7c-ba0a-5703373937b2-config-volume\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.460801 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22114ac5-b236-4e7c-ba0a-5703373937b2-secret-volume\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.462108 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/22114ac5-b236-4e7c-ba0a-5703373937b2-config-volume\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.466685 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22114ac5-b236-4e7c-ba0a-5703373937b2-secret-volume\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.479660 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjpcs\" (UniqueName: \"kubernetes.io/projected/22114ac5-b236-4e7c-ba0a-5703373937b2-kube-api-access-bjpcs\") pod \"collect-profiles-29502015-r2krd\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.514676 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.712265 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd"] Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.985779 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" event={"ID":"22114ac5-b236-4e7c-ba0a-5703373937b2","Type":"ContainerStarted","Data":"57fbf4c40bdc33bd0f195128acf9911c1ef9038b0f6d97dfada7c1aee2b196fc"} Feb 03 12:15:00 crc kubenswrapper[4679]: I0203 12:15:00.986296 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" event={"ID":"22114ac5-b236-4e7c-ba0a-5703373937b2","Type":"ContainerStarted","Data":"0a44d162f4d3e912bbdcc78e55e5d4fa1056d56fd4713a8127ea7ef648061c88"} Feb 03 12:15:01 crc kubenswrapper[4679]: I0203 12:15:01.004177 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" podStartSLOduration=1.004149479 podStartE2EDuration="1.004149479s" podCreationTimestamp="2026-02-03 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:15:01.001741633 +0000 UTC m=+573.476637741" watchObservedRunningTime="2026-02-03 12:15:01.004149479 +0000 UTC m=+573.479045567" Feb 03 12:15:01 crc kubenswrapper[4679]: I0203 12:15:01.995334 4679 generic.go:334] "Generic (PLEG): container finished" podID="22114ac5-b236-4e7c-ba0a-5703373937b2" containerID="57fbf4c40bdc33bd0f195128acf9911c1ef9038b0f6d97dfada7c1aee2b196fc" exitCode=0 Feb 03 12:15:01 crc kubenswrapper[4679]: I0203 12:15:01.995423 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" event={"ID":"22114ac5-b236-4e7c-ba0a-5703373937b2","Type":"ContainerDied","Data":"57fbf4c40bdc33bd0f195128acf9911c1ef9038b0f6d97dfada7c1aee2b196fc"} Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.240858 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.301812 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22114ac5-b236-4e7c-ba0a-5703373937b2-secret-volume\") pod \"22114ac5-b236-4e7c-ba0a-5703373937b2\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.301915 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22114ac5-b236-4e7c-ba0a-5703373937b2-config-volume\") pod \"22114ac5-b236-4e7c-ba0a-5703373937b2\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.301959 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjpcs\" (UniqueName: \"kubernetes.io/projected/22114ac5-b236-4e7c-ba0a-5703373937b2-kube-api-access-bjpcs\") pod \"22114ac5-b236-4e7c-ba0a-5703373937b2\" (UID: \"22114ac5-b236-4e7c-ba0a-5703373937b2\") " Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.303123 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22114ac5-b236-4e7c-ba0a-5703373937b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "22114ac5-b236-4e7c-ba0a-5703373937b2" (UID: "22114ac5-b236-4e7c-ba0a-5703373937b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.309435 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22114ac5-b236-4e7c-ba0a-5703373937b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "22114ac5-b236-4e7c-ba0a-5703373937b2" (UID: "22114ac5-b236-4e7c-ba0a-5703373937b2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.310301 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22114ac5-b236-4e7c-ba0a-5703373937b2-kube-api-access-bjpcs" (OuterVolumeSpecName: "kube-api-access-bjpcs") pod "22114ac5-b236-4e7c-ba0a-5703373937b2" (UID: "22114ac5-b236-4e7c-ba0a-5703373937b2"). InnerVolumeSpecName "kube-api-access-bjpcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.403126 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22114ac5-b236-4e7c-ba0a-5703373937b2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.403173 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjpcs\" (UniqueName: \"kubernetes.io/projected/22114ac5-b236-4e7c-ba0a-5703373937b2-kube-api-access-bjpcs\") on node \"crc\" DevicePath \"\"" Feb 03 12:15:03 crc kubenswrapper[4679]: I0203 12:15:03.403188 4679 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22114ac5-b236-4e7c-ba0a-5703373937b2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:15:04 crc kubenswrapper[4679]: I0203 12:15:04.007856 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" event={"ID":"22114ac5-b236-4e7c-ba0a-5703373937b2","Type":"ContainerDied","Data":"0a44d162f4d3e912bbdcc78e55e5d4fa1056d56fd4713a8127ea7ef648061c88"} Feb 03 12:15:04 crc kubenswrapper[4679]: I0203 12:15:04.007903 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a44d162f4d3e912bbdcc78e55e5d4fa1056d56fd4713a8127ea7ef648061c88" Feb 03 12:15:04 crc kubenswrapper[4679]: I0203 12:15:04.007980 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd" Feb 03 12:15:06 crc kubenswrapper[4679]: I0203 12:15:06.736150 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:15:06 crc kubenswrapper[4679]: I0203 12:15:06.736491 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:15:28 crc kubenswrapper[4679]: I0203 12:15:28.616596 4679 scope.go:117] "RemoveContainer" containerID="9a3ffc4402381fbd9b59003415773acf5e5a53706bc88385007bb2283b2092ee" Feb 03 12:15:28 crc kubenswrapper[4679]: I0203 12:15:28.638710 4679 scope.go:117] "RemoveContainer" containerID="cae5fcb48fcdd26431ca353410a02b37732d3da74315d0dedc7493366197ef58" Feb 03 12:15:36 crc kubenswrapper[4679]: I0203 12:15:36.735517 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:15:36 crc kubenswrapper[4679]: I0203 12:15:36.736078 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:16:06 crc kubenswrapper[4679]: I0203 
12:16:06.736326 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:16:06 crc kubenswrapper[4679]: I0203 12:16:06.737325 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:16:06 crc kubenswrapper[4679]: I0203 12:16:06.737422 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:16:06 crc kubenswrapper[4679]: I0203 12:16:06.738174 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b81e2fd517b786183416342868b0696a900871d24e38432da83ad64817cbf19"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:16:06 crc kubenswrapper[4679]: I0203 12:16:06.738253 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://5b81e2fd517b786183416342868b0696a900871d24e38432da83ad64817cbf19" gracePeriod=600 Feb 03 12:16:07 crc kubenswrapper[4679]: I0203 12:16:07.403037 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="5b81e2fd517b786183416342868b0696a900871d24e38432da83ad64817cbf19" exitCode=0 Feb 03 12:16:07 crc kubenswrapper[4679]: I0203 12:16:07.403124 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"5b81e2fd517b786183416342868b0696a900871d24e38432da83ad64817cbf19"} Feb 03 12:16:07 crc kubenswrapper[4679]: I0203 12:16:07.403459 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"4f8303cada887334f02fe5707aa2a3b67415ca179b5f84b540c5e9544432dd4c"} Feb 03 12:16:07 crc kubenswrapper[4679]: I0203 12:16:07.403484 4679 scope.go:117] "RemoveContainer" containerID="196dda581caf52f44c16f6535475949177d645bc243bf81cdb6ae8ae7bf82aee" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.535387 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf"] Feb 03 12:17:45 crc kubenswrapper[4679]: E0203 12:17:45.536261 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22114ac5-b236-4e7c-ba0a-5703373937b2" containerName="collect-profiles" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.536282 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="22114ac5-b236-4e7c-ba0a-5703373937b2" containerName="collect-profiles" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.536432 4679 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="22114ac5-b236-4e7c-ba0a-5703373937b2" containerName="collect-profiles" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.537010 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.540007 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.540330 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.543801 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-j5c2m"] Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.544753 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-j5c2m" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.545675 4679 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sf6r6" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.547853 4679 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7rk2x" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.558864 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf"] Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.581694 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xxlm9"] Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.582597 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.583473 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-j5c2m"] Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.586078 4679 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dlsmk" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.599467 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xxlm9"] Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.599553 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vxx\" (UniqueName: \"kubernetes.io/projected/dc20b5f8-7353-4785-ac36-1f263f60b102-kube-api-access-55vxx\") pod \"cert-manager-858654f9db-j5c2m\" (UID: \"dc20b5f8-7353-4785-ac36-1f263f60b102\") " pod="cert-manager/cert-manager-858654f9db-j5c2m" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.599951 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkcth\" (UniqueName: \"kubernetes.io/projected/1934112b-b7de-4e8a-a94c-696e9a9412cd-kube-api-access-fkcth\") pod \"cert-manager-cainjector-cf98fcc89-m6rvf\" (UID: \"1934112b-b7de-4e8a-a94c-696e9a9412cd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.600203 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8zm\" (UniqueName: \"kubernetes.io/projected/bcb6f977-4961-473e-afe5-be2b055270e6-kube-api-access-dt8zm\") pod \"cert-manager-webhook-687f57d79b-xxlm9\" (UID: \"bcb6f977-4961-473e-afe5-be2b055270e6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.701404 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8zm\" (UniqueName: \"kubernetes.io/projected/bcb6f977-4961-473e-afe5-be2b055270e6-kube-api-access-dt8zm\") pod \"cert-manager-webhook-687f57d79b-xxlm9\" (UID: \"bcb6f977-4961-473e-afe5-be2b055270e6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.701467 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vxx\" (UniqueName: \"kubernetes.io/projected/dc20b5f8-7353-4785-ac36-1f263f60b102-kube-api-access-55vxx\") pod \"cert-manager-858654f9db-j5c2m\" (UID: \"dc20b5f8-7353-4785-ac36-1f263f60b102\") " pod="cert-manager/cert-manager-858654f9db-j5c2m" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.701495 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkcth\" (UniqueName: \"kubernetes.io/projected/1934112b-b7de-4e8a-a94c-696e9a9412cd-kube-api-access-fkcth\") pod \"cert-manager-cainjector-cf98fcc89-m6rvf\" (UID: \"1934112b-b7de-4e8a-a94c-696e9a9412cd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.730241 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vxx\" (UniqueName: \"kubernetes.io/projected/dc20b5f8-7353-4785-ac36-1f263f60b102-kube-api-access-55vxx\") pod \"cert-manager-858654f9db-j5c2m\" (UID: \"dc20b5f8-7353-4785-ac36-1f263f60b102\") " 
pod="cert-manager/cert-manager-858654f9db-j5c2m" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.730894 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkcth\" (UniqueName: \"kubernetes.io/projected/1934112b-b7de-4e8a-a94c-696e9a9412cd-kube-api-access-fkcth\") pod \"cert-manager-cainjector-cf98fcc89-m6rvf\" (UID: \"1934112b-b7de-4e8a-a94c-696e9a9412cd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.733782 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8zm\" (UniqueName: \"kubernetes.io/projected/bcb6f977-4961-473e-afe5-be2b055270e6-kube-api-access-dt8zm\") pod \"cert-manager-webhook-687f57d79b-xxlm9\" (UID: \"bcb6f977-4961-473e-afe5-be2b055270e6\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.862189 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.871015 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-j5c2m" Feb 03 12:17:45 crc kubenswrapper[4679]: I0203 12:17:45.900550 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:17:46 crc kubenswrapper[4679]: I0203 12:17:46.132275 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-j5c2m"] Feb 03 12:17:46 crc kubenswrapper[4679]: I0203 12:17:46.139324 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:17:46 crc kubenswrapper[4679]: I0203 12:17:46.205331 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xxlm9"] Feb 03 12:17:46 crc kubenswrapper[4679]: W0203 12:17:46.212776 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb6f977_4961_473e_afe5_be2b055270e6.slice/crio-2011e30acc859bcdc49ccf495cbd64cde28c26f072948424335d8f1220fadf08 WatchSource:0}: Error finding container 2011e30acc859bcdc49ccf495cbd64cde28c26f072948424335d8f1220fadf08: Status 404 returned error can't find the container with id 2011e30acc859bcdc49ccf495cbd64cde28c26f072948424335d8f1220fadf08 Feb 03 12:17:46 crc kubenswrapper[4679]: I0203 12:17:46.291791 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf"] Feb 03 12:17:46 crc kubenswrapper[4679]: W0203 12:17:46.292486 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1934112b_b7de_4e8a_a94c_696e9a9412cd.slice/crio-e4a5740eac9911513b1c652a7efa711c28b3946c6eab81789d4c9112afc9a0b5 WatchSource:0}: Error finding container e4a5740eac9911513b1c652a7efa711c28b3946c6eab81789d4c9112afc9a0b5: Status 404 returned error can't find the container with id e4a5740eac9911513b1c652a7efa711c28b3946c6eab81789d4c9112afc9a0b5 Feb 03 12:17:47 crc kubenswrapper[4679]: I0203 12:17:47.031780 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" 
event={"ID":"bcb6f977-4961-473e-afe5-be2b055270e6","Type":"ContainerStarted","Data":"2011e30acc859bcdc49ccf495cbd64cde28c26f072948424335d8f1220fadf08"} Feb 03 12:17:47 crc kubenswrapper[4679]: I0203 12:17:47.033250 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" event={"ID":"1934112b-b7de-4e8a-a94c-696e9a9412cd","Type":"ContainerStarted","Data":"e4a5740eac9911513b1c652a7efa711c28b3946c6eab81789d4c9112afc9a0b5"} Feb 03 12:17:47 crc kubenswrapper[4679]: I0203 12:17:47.034560 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-j5c2m" event={"ID":"dc20b5f8-7353-4785-ac36-1f263f60b102","Type":"ContainerStarted","Data":"f4c14d82f6ef17335da160a42e1fbf93ea86dfa0f929670ed82d933d28d74b62"} Feb 03 12:17:52 crc kubenswrapper[4679]: I0203 12:17:52.145288 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" event={"ID":"bcb6f977-4961-473e-afe5-be2b055270e6","Type":"ContainerStarted","Data":"d6f4323adc37f9807d68e4e3634f9f237c4f78412a68b67ce0bf1f96f98525e4"} Feb 03 12:17:52 crc kubenswrapper[4679]: I0203 12:17:52.145924 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:17:52 crc kubenswrapper[4679]: I0203 12:17:52.170397 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" podStartSLOduration=2.783757891 podStartE2EDuration="7.170374945s" podCreationTimestamp="2026-02-03 12:17:45 +0000 UTC" firstStartedPulling="2026-02-03 12:17:46.21457556 +0000 UTC m=+738.689471648" lastFinishedPulling="2026-02-03 12:17:50.601192614 +0000 UTC m=+743.076088702" observedRunningTime="2026-02-03 12:17:52.164215363 +0000 UTC m=+744.639111461" watchObservedRunningTime="2026-02-03 12:17:52.170374945 +0000 UTC m=+744.645271023" Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.034061 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7ws5"] Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035387 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-controller" containerID="cri-o://a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035426 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="sbdb" containerID="cri-o://b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035426 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035566 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-node" 
containerID="cri-o://3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035623 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="nbdb" containerID="cri-o://e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035626 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-acl-logging" containerID="cri-o://3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.035681 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="northd" containerID="cri-o://d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa" gracePeriod=30 Feb 03 12:17:55 crc kubenswrapper[4679]: I0203 12:17:55.074660 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" containerID="cri-o://94e6a70bd4da159c97ae0c870f9413fed3101f98bc4371806ba39bc586b88a66" gracePeriod=30 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.186714 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/2.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.190470 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/1.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.190561 4679 generic.go:334] "Generic (PLEG): container finished" podID="413e7c7d-7c01-4502-8d73-3c3df2e60956" containerID="f734d03952e6546980c7e8006be19bad9093b7855a66f5543811cbe8f0ff2a53" exitCode=2 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.190698 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerDied","Data":"f734d03952e6546980c7e8006be19bad9093b7855a66f5543811cbe8f0ff2a53"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.191086 4679 scope.go:117] "RemoveContainer" containerID="f3703d81974e8264b74ab7340bc6312ee3a8cc64ae28ca4f7c7f0d9ed2b2827c" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.192307 4679 scope.go:117] "RemoveContainer" containerID="f734d03952e6546980c7e8006be19bad9093b7855a66f5543811cbe8f0ff2a53" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.195008 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" event={"ID":"1934112b-b7de-4e8a-a94c-696e9a9412cd","Type":"ContainerStarted","Data":"55095731dd3b31a48c86484f5cc9682bded03e4af566c9fcafe1c8c67f05e1d1"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.198987 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/3.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.201480 4679 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovn-acl-logging/0.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.202420 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovn-controller/0.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.202993 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="94e6a70bd4da159c97ae0c870f9413fed3101f98bc4371806ba39bc586b88a66" exitCode=0 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203080 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00" exitCode=0 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203132 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0" exitCode=0 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203184 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa" exitCode=0 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203239 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7" exitCode=0 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203306 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc" exitCode=0 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203416 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425" exitCode=143 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203489 4679 generic.go:334] "Generic (PLEG): container finished" podID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerID="a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3" exitCode=143 Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203570 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"94e6a70bd4da159c97ae0c870f9413fed3101f98bc4371806ba39bc586b88a66"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203673 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203753 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203827 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" 
event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203896 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.203970 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.204123 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.204201 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3"} Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.234526 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-m6rvf" podStartSLOduration=2.724346249 podStartE2EDuration="11.234506168s" podCreationTimestamp="2026-02-03 12:17:45 +0000 UTC" firstStartedPulling="2026-02-03 12:17:46.29482251 +0000 UTC m=+738.769718598" lastFinishedPulling="2026-02-03 12:17:54.804982429 +0000 UTC m=+747.279878517" observedRunningTime="2026-02-03 12:17:56.232974597 +0000 UTC m=+748.707870685" watchObservedRunningTime="2026-02-03 12:17:56.234506168 +0000 UTC m=+748.709402256" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.251703 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovnkube-controller/3.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.266207 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovn-acl-logging/0.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.269225 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovn-controller/0.log" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.270726 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330015 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hsxwp"] Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330288 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kubecfg-setup" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330310 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kubecfg-setup" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330325 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330333 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330341 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-acl-logging" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330351 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-acl-logging" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330378 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330387 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330396 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="sbdb" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330403 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="sbdb" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330414 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330421 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330436 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-node" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330442 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-node" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330450 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="northd" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330457 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="northd" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330468 4679 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330474 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330482 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330489 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330501 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="nbdb" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330507 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="nbdb" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330627 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330638 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="northd" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330645 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-acl-logging" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330657 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330665 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330673 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330681 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330689 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="kube-rbac-proxy-node" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330698 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovn-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330708 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="sbdb" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330716 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="nbdb" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330808 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 
12:17:56.330815 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: E0203 12:17:56.330823 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330830 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.330941 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" containerName="ovnkube-controller" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.332813 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.337994 4679 scope.go:117] "RemoveContainer" containerID="bc4f1f63799b26d13bde91aef92f6009fe19fbdf9377ba52025254344014b640" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.406986 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407058 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xhpj\" (UniqueName: \"kubernetes.io/projected/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-kube-api-access-6xhpj\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407137 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407244 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-node-log\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407295 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-node-log" (OuterVolumeSpecName: "node-log") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407707 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovn-node-metrics-cert\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407762 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-log-socket\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407805 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-ovn\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407826 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-systemd-units\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407850 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-systemd\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407889 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-ovn-kubernetes\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407900 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-log-socket" (OuterVolumeSpecName: "log-socket") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407891 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407935 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-etc-openvswitch\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407970 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.407989 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-netns\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408018 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-config\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408025 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408034 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-var-lib-openvswitch\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408070 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-kubelet\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408019 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408103 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-bin\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408138 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-slash\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408053 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408073 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408163 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-netd\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408182 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408207 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408228 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408240 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-script-lib\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408274 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-openvswitch\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408323 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-env-overrides\") pod \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\" (UID: \"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa\") " Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408248 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-slash" (OuterVolumeSpecName: "host-slash") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408463 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408470 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408585 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-etc-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408590 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408620 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovnkube-script-lib\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408698 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408727 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-cni-bin\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408746 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408781 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-systemd-units\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408814 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-env-overrides\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408833 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovn-node-metrics-cert\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408903 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-ovn\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408930 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-log-socket\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.408988 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-systemd\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409018 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-var-lib-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409035 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-cni-netd\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409080 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-kubelet\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409105 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltb7n\" (UniqueName: \"kubernetes.io/projected/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-kube-api-access-ltb7n\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409151 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409175 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-run-netns\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409191 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovnkube-config\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409222 
4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-node-log\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409243 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409273 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-slash\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409331 4679 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-log-socket\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409345 4679 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409373 4679 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409384 4679 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409396 4679 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409406 4679 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409490 4679 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409523 4679 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409540 4679 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 03 
12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409554 4679 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409578 4679 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-slash\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409590 4679 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409599 4679 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409607 4679 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409615 4679 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409626 4679 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.409636 4679 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-node-log\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.415242 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.416907 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-kube-api-access-6xhpj" (OuterVolumeSpecName: "kube-api-access-6xhpj") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "kube-api-access-6xhpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.423313 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" (UID: "b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511402 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-kubelet\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511481 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltb7n\" (UniqueName: \"kubernetes.io/projected/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-kube-api-access-ltb7n\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511526 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511549 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-run-netns\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511570 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovnkube-config\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511599 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-node-log\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511606 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-kubelet\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511683 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511625 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 
12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511688 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-run-netns\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511756 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-slash\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511791 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511812 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-etc-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511837 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovnkube-script-lib\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511838 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-slash\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511876 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-cni-bin\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511900 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511906 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-node-log\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511943 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-cni-bin\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511960 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-systemd-units\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511984 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-env-overrides\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512005 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovn-node-metrics-cert\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512065 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-ovn\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512096 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-log-socket\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512115 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-systemd\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512143 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-var-lib-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512162 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-cni-netd\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512259 4679 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512272 4679 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512287 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xhpj\" (UniqueName: \"kubernetes.io/projected/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa-kube-api-access-6xhpj\") on node \"crc\" DevicePath \"\"" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512321 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-cni-netd\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.511877 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-etc-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512375 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512399 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-systemd-units\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512535 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-systemd\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512535 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-log-socket\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512640 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-run-ovn\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512658 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovnkube-config\") pod 
\"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512685 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-var-lib-openvswitch\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.512924 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-env-overrides\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.513057 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovnkube-script-lib\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.516046 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-ovn-node-metrics-cert\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.532962 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltb7n\" (UniqueName: \"kubernetes.io/projected/e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6-kube-api-access-ltb7n\") pod \"ovnkube-node-hsxwp\" (UID: \"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: I0203 12:17:56.773481 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:17:56 crc kubenswrapper[4679]: W0203 12:17:56.802946 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ca66bd_5eec_4d90_ae22_9f5b28d1c8e6.slice/crio-bfd9d5454e6712373c9c50911b3a956323a259e520b0af3c1db87234886bd0e5 WatchSource:0}: Error finding container bfd9d5454e6712373c9c50911b3a956323a259e520b0af3c1db87234886bd0e5: Status 404 returned error can't find the container with id bfd9d5454e6712373c9c50911b3a956323a259e520b0af3c1db87234886bd0e5 Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.216451 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-j5c2m" event={"ID":"dc20b5f8-7353-4785-ac36-1f263f60b102","Type":"ContainerStarted","Data":"b32bd0e9e15d6d6329c9c988fdefe143c607c4d58541f8788ae6046a3505f576"} Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.219693 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"8dea4832f9a8da41e2015248685c2c6edad1c006da79191fb8da26bf962d52b0"} Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.219737 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"bfd9d5454e6712373c9c50911b3a956323a259e520b0af3c1db87234886bd0e5"} Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.225125 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovn-acl-logging/0.log" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.225780 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b7ws5_b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/ovn-controller/0.log" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.226332 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" event={"ID":"b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa","Type":"ContainerDied","Data":"1a98b8bcae840432f85cf6a5750c3c3e8904fc56c9152cf740898c2d4ce9dd10"} Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.226395 4679 scope.go:117] "RemoveContainer" containerID="94e6a70bd4da159c97ae0c870f9413fed3101f98bc4371806ba39bc586b88a66" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.226553 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b7ws5" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.230608 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zqm7_413e7c7d-7c01-4502-8d73-3c3df2e60956/kube-multus/2.log" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.230744 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zqm7" event={"ID":"413e7c7d-7c01-4502-8d73-3c3df2e60956","Type":"ContainerStarted","Data":"29fd5a02482148ea5a904d321e2d70992ce7fa94b1b29183b71b830b7a4a9afb"} Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.241151 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-j5c2m" podStartSLOduration=2.114344544 podStartE2EDuration="12.241127692s" podCreationTimestamp="2026-02-03 12:17:45 +0000 UTC" firstStartedPulling="2026-02-03 12:17:46.139081724 +0000 UTC m=+738.613977812" lastFinishedPulling="2026-02-03 12:17:56.265864872 +0000 UTC m=+748.740760960" observedRunningTime="2026-02-03 12:17:57.233379459 +0000 UTC m=+749.708275547" watchObservedRunningTime="2026-02-03 12:17:57.241127692 +0000 UTC m=+749.716023780" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.324846 4679 scope.go:117] "RemoveContainer" containerID="b0c1d7df13408cc2a904d32fd417cdfa76c2b620b655d5f3c7ad529a3823ff00" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.343215 4679 scope.go:117] "RemoveContainer" containerID="e42680c494ac4b829d2c5d66ebbed76bf5697092edf626851a7a9ea4306d0ee0" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.346993 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7ws5"] Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.354239 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b7ws5"] Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.364118 4679 scope.go:117] "RemoveContainer" containerID="d02df28301be9d02d1a28ccf5a669138478ea951a1549eab48a8a97c752151fa" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.379127 4679 scope.go:117] "RemoveContainer" containerID="c8cc84223fdb038cc1a7be329f7d563000e69964a7840c521dbb94bb798cb9b7" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.398229 4679 scope.go:117] "RemoveContainer" containerID="3b10c3182112386dc9b589091e408179882038e9f1c0e7a5c3929b25bb25bfdc" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.422595 4679 scope.go:117] "RemoveContainer" containerID="3d3a35cebd1f1331a0a898246767ba49c946886dd874ce1405564ad8348db425" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.440741 4679 scope.go:117] "RemoveContainer" containerID="a879a6141c2793a1a044f57eb3ece3091c3ec7b52de4c79193a8b92f6afe02c3" Feb 03 12:17:57 crc kubenswrapper[4679]: I0203 12:17:57.461452 4679 scope.go:117] "RemoveContainer" containerID="9f40e90f4f5c50463d6bdc53abdc9e673c15c1fb2b17efc11ed90d3b6f49291f" Feb 03 12:17:58 crc kubenswrapper[4679]: I0203 12:17:58.232665 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa" path="/var/lib/kubelet/pods/b2691dd3-b30a-4cf3-8bff-1d84cb36b3fa/volumes" Feb 03 12:17:58 crc kubenswrapper[4679]: I0203 12:17:58.237113 4679 generic.go:334] "Generic (PLEG): container finished" podID="e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6" containerID="8dea4832f9a8da41e2015248685c2c6edad1c006da79191fb8da26bf962d52b0" exitCode=0 Feb 03 12:17:58 crc kubenswrapper[4679]: I0203 
12:17:58.237186 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerDied","Data":"8dea4832f9a8da41e2015248685c2c6edad1c006da79191fb8da26bf962d52b0"} Feb 03 12:17:59 crc kubenswrapper[4679]: I0203 12:17:59.255814 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"8200b0b043dd8dbedd3bfd3ad4852d5303955406066ac3a7a74c5a5807c8f723"} Feb 03 12:17:59 crc kubenswrapper[4679]: I0203 12:17:59.256481 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"4e7815aeb5d0780d411515014726f090f4f1fede926446376b2a4f895b319152"} Feb 03 12:17:59 crc kubenswrapper[4679]: I0203 12:17:59.256497 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"b3d7d04f01e4f2a9f104a5f5bb7bd5f856d5e153af2618a55520426f3151a060"} Feb 03 12:17:59 crc kubenswrapper[4679]: I0203 12:17:59.256512 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"72694cea1f53e516c7874c12dbd5628a95e73b63290b06673f60a07d9eb536cd"} Feb 03 12:18:00 crc kubenswrapper[4679]: I0203 12:18:00.267208 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"f5576ccc496aec35b5d7fb3852cf30536c2c80b392b9d850ab1ca3c3b0e9fd42"} Feb 03 12:18:00 crc kubenswrapper[4679]: I0203 12:18:00.267256 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"42e9f5b7260c9f8ca84439632cfeff8f481be68212ba319a4d99f3893d7b5809"} Feb 03 12:18:00 crc kubenswrapper[4679]: I0203 12:18:00.904451 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xxlm9" Feb 03 12:18:02 crc kubenswrapper[4679]: I0203 12:18:02.283871 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"08a9a17131bd8f9a121df882af258891c3de37283cc4054df8d8b5e08ee48cbe"} Feb 03 12:18:04 crc kubenswrapper[4679]: I0203 12:18:04.304973 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" event={"ID":"e0ca66bd-5eec-4d90-ae22-9f5b28d1c8e6","Type":"ContainerStarted","Data":"087f1a6d88fa7f68d06695ba565e8d8a40d605efa12aefd508e795a8b9ac06e4"} Feb 03 12:18:04 crc kubenswrapper[4679]: I0203 12:18:04.306473 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:18:04 crc kubenswrapper[4679]: I0203 12:18:04.306505 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:18:04 crc kubenswrapper[4679]: I0203 12:18:04.336687 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:18:04 crc kubenswrapper[4679]: I0203 12:18:04.343543 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" podStartSLOduration=8.343520144 podStartE2EDuration="8.343520144s" podCreationTimestamp="2026-02-03 12:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:18:04.33727056 +0000 UTC m=+756.812166648" watchObservedRunningTime="2026-02-03 12:18:04.343520144 +0000 UTC m=+756.818416232" Feb 03 12:18:05 crc kubenswrapper[4679]: I0203 12:18:05.310670 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:18:05 crc kubenswrapper[4679]: I0203 12:18:05.348277 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:18:06 crc kubenswrapper[4679]: I0203 12:18:06.128076 4679 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 12:18:26 crc kubenswrapper[4679]: I0203 12:18:26.802043 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hsxwp" Feb 03 12:18:36 crc kubenswrapper[4679]: I0203 12:18:36.735343 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:18:36 crc kubenswrapper[4679]: I0203 12:18:36.735940 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.097943 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"] Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.099577 4679 util.go:30] "No sandbox for pod can be found. 
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.102040 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.108271 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"]
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.181957 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7frd\" (UniqueName: \"kubernetes.io/projected/e945486d-e54e-4fab-a0a2-5564e08ce31c-kube-api-access-m7frd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.182062 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.182208 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.284067 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7frd\" (UniqueName: \"kubernetes.io/projected/e945486d-e54e-4fab-a0a2-5564e08ce31c-kube-api-access-m7frd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.284188 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.284216 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.284879 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.285620 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.303328 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7frd\" (UniqueName: \"kubernetes.io/projected/e945486d-e54e-4fab-a0a2-5564e08ce31c-kube-api-access-m7frd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.414680 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:47 crc kubenswrapper[4679]: I0203 12:18:47.649393 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"]
Feb 03 12:18:48 crc kubenswrapper[4679]: I0203 12:18:48.571935 4679 generic.go:334] "Generic (PLEG): container finished" podID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerID="ee5e0384a7eb5e2b669da13ea2c39971f5fccebb9649dcbbff7f94d977d82f8f" exitCode=0
Feb 03 12:18:48 crc kubenswrapper[4679]: I0203 12:18:48.572063 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd" event={"ID":"e945486d-e54e-4fab-a0a2-5564e08ce31c","Type":"ContainerDied","Data":"ee5e0384a7eb5e2b669da13ea2c39971f5fccebb9649dcbbff7f94d977d82f8f"}
Feb 03 12:18:48 crc kubenswrapper[4679]: I0203 12:18:48.572341 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd" event={"ID":"e945486d-e54e-4fab-a0a2-5564e08ce31c","Type":"ContainerStarted","Data":"7a28b957f2284570c6d194049061440bfbb9589edd67e9075c18d8173d462002"}
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.157501 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9dl9"]
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.158756 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.170825 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9dl9"]
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.211295 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-utilities\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.211468 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-catalog-content\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.211516 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzvv\" (UniqueName: \"kubernetes.io/projected/2d61e086-16c5-4499-9834-182fe7561c0f-kube-api-access-ztzvv\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.312642 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-catalog-content\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.312741 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzvv\" (UniqueName: \"kubernetes.io/projected/2d61e086-16c5-4499-9834-182fe7561c0f-kube-api-access-ztzvv\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.312817 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-utilities\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.313408 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-catalog-content\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.313454 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-utilities\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.343624 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztzvv\" (UniqueName: \"kubernetes.io/projected/2d61e086-16c5-4499-9834-182fe7561c0f-kube-api-access-ztzvv\") pod \"redhat-operators-v9dl9\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") " pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.485793 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:49 crc kubenswrapper[4679]: I0203 12:18:49.753833 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9dl9"]
Feb 03 12:18:50 crc kubenswrapper[4679]: I0203 12:18:50.593993 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d61e086-16c5-4499-9834-182fe7561c0f" containerID="2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67" exitCode=0
Feb 03 12:18:50 crc kubenswrapper[4679]: I0203 12:18:50.594085 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerDied","Data":"2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67"}
Feb 03 12:18:50 crc kubenswrapper[4679]: I0203 12:18:50.594594 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerStarted","Data":"9e2a9046b84df8e70c7ca6c18efad555d66a21389eb12d742a7cf0d9cc76d294"}
Feb 03 12:18:50 crc kubenswrapper[4679]: I0203 12:18:50.597191 4679 generic.go:334] "Generic (PLEG): container finished" podID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerID="a4fafe398c711150bba74891919a6df7bd8eae567e4699e59dd917eda0b00d5a" exitCode=0
Feb 03 12:18:50 crc kubenswrapper[4679]: I0203 12:18:50.597223 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd" event={"ID":"e945486d-e54e-4fab-a0a2-5564e08ce31c","Type":"ContainerDied","Data":"a4fafe398c711150bba74891919a6df7bd8eae567e4699e59dd917eda0b00d5a"}
Feb 03 12:18:51 crc kubenswrapper[4679]: I0203 12:18:51.606719 4679 generic.go:334] "Generic (PLEG): container finished" podID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerID="7ed8da69d6ff3b0243985f1525ccfac4a42be9ab20a3974c88ea30904525f73f" exitCode=0
Feb 03 12:18:51 crc kubenswrapper[4679]: I0203 12:18:51.606806 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd" event={"ID":"e945486d-e54e-4fab-a0a2-5564e08ce31c","Type":"ContainerDied","Data":"7ed8da69d6ff3b0243985f1525ccfac4a42be9ab20a3974c88ea30904525f73f"}
Feb 03 12:18:51 crc kubenswrapper[4679]: I0203 12:18:51.609849 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerStarted","Data":"38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244"}
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.617105 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d61e086-16c5-4499-9834-182fe7561c0f" containerID="38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244" exitCode=0
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.617196 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerDied","Data":"38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244"}
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.821702 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.864713 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7frd\" (UniqueName: \"kubernetes.io/projected/e945486d-e54e-4fab-a0a2-5564e08ce31c-kube-api-access-m7frd\") pod \"e945486d-e54e-4fab-a0a2-5564e08ce31c\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") "
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.864785 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-bundle\") pod \"e945486d-e54e-4fab-a0a2-5564e08ce31c\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") "
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.864921 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-util\") pod \"e945486d-e54e-4fab-a0a2-5564e08ce31c\" (UID: \"e945486d-e54e-4fab-a0a2-5564e08ce31c\") "
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.867226 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-bundle" (OuterVolumeSpecName: "bundle") pod "e945486d-e54e-4fab-a0a2-5564e08ce31c" (UID: "e945486d-e54e-4fab-a0a2-5564e08ce31c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.872315 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e945486d-e54e-4fab-a0a2-5564e08ce31c-kube-api-access-m7frd" (OuterVolumeSpecName: "kube-api-access-m7frd") pod "e945486d-e54e-4fab-a0a2-5564e08ce31c" (UID: "e945486d-e54e-4fab-a0a2-5564e08ce31c"). InnerVolumeSpecName "kube-api-access-m7frd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.886587 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-util" (OuterVolumeSpecName: "util") pod "e945486d-e54e-4fab-a0a2-5564e08ce31c" (UID: "e945486d-e54e-4fab-a0a2-5564e08ce31c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.966817 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7frd\" (UniqueName: \"kubernetes.io/projected/e945486d-e54e-4fab-a0a2-5564e08ce31c-kube-api-access-m7frd\") on node \"crc\" DevicePath \"\""
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.966870 4679 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:18:52 crc kubenswrapper[4679]: I0203 12:18:52.966886 4679 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e945486d-e54e-4fab-a0a2-5564e08ce31c-util\") on node \"crc\" DevicePath \"\""
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.036692 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd" event={"ID":"e945486d-e54e-4fab-a0a2-5564e08ce31c","Type":"ContainerDied","Data":"7a28b957f2284570c6d194049061440bfbb9589edd67e9075c18d8173d462002"}
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.036756 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a28b957f2284570c6d194049061440bfbb9589edd67e9075c18d8173d462002"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.036961 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.928470 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l95cn"]
Feb 03 12:18:54 crc kubenswrapper[4679]: E0203 12:18:54.929064 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="util"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.929082 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="util"
Feb 03 12:18:54 crc kubenswrapper[4679]: E0203 12:18:54.929096 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="pull"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.929103 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="pull"
Feb 03 12:18:54 crc kubenswrapper[4679]: E0203 12:18:54.929114 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="extract"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.929125 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="extract"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.929261 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e945486d-e54e-4fab-a0a2-5564e08ce31c" containerName="extract"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.929716 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l95cn"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.931895 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9mfz6"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.932171 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.937770 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 03 12:18:54 crc kubenswrapper[4679]: I0203 12:18:54.947472 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l95cn"]
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.021345 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28nw\" (UniqueName: \"kubernetes.io/projected/ad475107-250b-403a-8563-b90f107e4f89-kube-api-access-c28nw\") pod \"nmstate-operator-646758c888-l95cn\" (UID: \"ad475107-250b-403a-8563-b90f107e4f89\") " pod="openshift-nmstate/nmstate-operator-646758c888-l95cn"
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.045535 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerStarted","Data":"e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04"}
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.061631 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9dl9" podStartSLOduration=2.6137139940000003 podStartE2EDuration="6.061602757s" podCreationTimestamp="2026-02-03 12:18:49 +0000 UTC" firstStartedPulling="2026-02-03 12:18:50.596077874 +0000 UTC m=+803.070973962" lastFinishedPulling="2026-02-03 12:18:54.043966637 +0000 UTC m=+806.518862725" observedRunningTime="2026-02-03 12:18:55.061187357 +0000 UTC m=+807.536083455" watchObservedRunningTime="2026-02-03 12:18:55.061602757 +0000 UTC m=+807.536498845"
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.122479 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28nw\" (UniqueName: \"kubernetes.io/projected/ad475107-250b-403a-8563-b90f107e4f89-kube-api-access-c28nw\") pod \"nmstate-operator-646758c888-l95cn\" (UID: \"ad475107-250b-403a-8563-b90f107e4f89\") " pod="openshift-nmstate/nmstate-operator-646758c888-l95cn"
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.142718 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28nw\" (UniqueName: \"kubernetes.io/projected/ad475107-250b-403a-8563-b90f107e4f89-kube-api-access-c28nw\") pod \"nmstate-operator-646758c888-l95cn\" (UID: \"ad475107-250b-403a-8563-b90f107e4f89\") " pod="openshift-nmstate/nmstate-operator-646758c888-l95cn"
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.245120 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l95cn"
Feb 03 12:18:55 crc kubenswrapper[4679]: I0203 12:18:55.511663 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l95cn"]
Feb 03 12:18:56 crc kubenswrapper[4679]: I0203 12:18:56.051556 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l95cn" event={"ID":"ad475107-250b-403a-8563-b90f107e4f89","Type":"ContainerStarted","Data":"77a2ba9f476c5216f97f1c65e4a534d0a5e392ee4d6ef266268686df98604625"}
Feb 03 12:18:59 crc kubenswrapper[4679]: I0203 12:18:59.068724 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l95cn" event={"ID":"ad475107-250b-403a-8563-b90f107e4f89","Type":"ContainerStarted","Data":"9ca9d3dd7ff876977aa4f7671f25e20c8b03a03d52c6421991dbf54695a39b77"}
Feb 03 12:18:59 crc kubenswrapper[4679]: I0203 12:18:59.087462 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-l95cn" podStartSLOduration=2.615589306 podStartE2EDuration="5.087436459s" podCreationTimestamp="2026-02-03 12:18:54 +0000 UTC" firstStartedPulling="2026-02-03 12:18:55.528908472 +0000 UTC m=+808.003804560" lastFinishedPulling="2026-02-03 12:18:58.000755615 +0000 UTC m=+810.475651713" observedRunningTime="2026-02-03 12:18:59.085880112 +0000 UTC m=+811.560776210" watchObservedRunningTime="2026-02-03 12:18:59.087436459 +0000 UTC m=+811.562332557"
Feb 03 12:18:59 crc kubenswrapper[4679]: I0203 12:18:59.486797 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:18:59 crc kubenswrapper[4679]: I0203 12:18:59.487149 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.060170 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.061627 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.063785 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rpzvq"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.083765 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.087114 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.088035 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
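The pod_startup_latency_tracker records expose a consistent relationship: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A small check of that arithmetic against the redhat-operators-v9dl9 timestamps from the record above (the layout string is Go's default time format; the decomposition itself is inferred from the logged numbers, not from kubelet source):

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching timestamps as printed in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-03 12:18:49 +0000 UTC")
	firstPull := mustParse("2026-02-03 12:18:50.596077874 +0000 UTC")
	lastPull := mustParse("2026-02-03 12:18:54.043966637 +0000 UTC")
	observed := mustParse("2026-02-03 12:18:55.061602757 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)         // 6.061602757s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.613713994s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}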
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.090308 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.097790 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvxrj\" (UniqueName: \"kubernetes.io/projected/5d978da5-5322-40f9-a7ea-c7dd2295874f-kube-api-access-rvxrj\") pod \"nmstate-metrics-54757c584b-n5w4n\" (UID: \"5d978da5-5322-40f9-a7ea-c7dd2295874f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.105193 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-s6vrk"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.106275 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.131449 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.199692 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5pj6\" (UniqueName: \"kubernetes.io/projected/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-kube-api-access-r5pj6\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.200141 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4a9f66b5-a4ee-40b5-95cf-159557632d17-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.200260 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-dbus-socket\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.200393 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvxrj\" (UniqueName: \"kubernetes.io/projected/5d978da5-5322-40f9-a7ea-c7dd2295874f-kube-api-access-rvxrj\") pod \"nmstate-metrics-54757c584b-n5w4n\" (UID: \"5d978da5-5322-40f9-a7ea-c7dd2295874f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.200517 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2j4\" (UniqueName: \"kubernetes.io/projected/4a9f66b5-a4ee-40b5-95cf-159557632d17-kube-api-access-8m2j4\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.200684 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-ovs-socket\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.200821 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-nmstate-lock\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.225295 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvxrj\" (UniqueName: \"kubernetes.io/projected/5d978da5-5322-40f9-a7ea-c7dd2295874f-kube-api-access-rvxrj\") pod \"nmstate-metrics-54757c584b-n5w4n\" (UID: \"5d978da5-5322-40f9-a7ea-c7dd2295874f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.277421 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.278926 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.281909 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.282174 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.282347 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-xx4ph"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.289212 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302042 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5p9p\" (UniqueName: \"kubernetes.io/projected/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-kube-api-access-f5p9p\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302114 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5pj6\" (UniqueName: \"kubernetes.io/projected/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-kube-api-access-r5pj6\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302146 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4a9f66b5-a4ee-40b5-95cf-159557632d17-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302172 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-dbus-socket\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302207 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m2j4\" (UniqueName: \"kubernetes.io/projected/4a9f66b5-a4ee-40b5-95cf-159557632d17-kube-api-access-8m2j4\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302257 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-ovs-socket\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302294 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: E0203 12:19:00.302313 4679 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302337 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-nmstate-lock\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: E0203 12:19:00.302399 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f66b5-a4ee-40b5-95cf-159557632d17-tls-key-pair podName:4a9f66b5-a4ee-40b5-95cf-159557632d17 nodeName:}" failed. No retries permitted until 2026-02-03 12:19:00.802371026 +0000 UTC m=+813.277267114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4a9f66b5-a4ee-40b5-95cf-159557632d17-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-5fck4" (UID: "4a9f66b5-a4ee-40b5-95cf-159557632d17") : secret "openshift-nmstate-webhook" not found
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302406 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-nmstate-lock\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302421 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302667 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-ovs-socket\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.302807 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-dbus-socket\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.323208 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m2j4\" (UniqueName: \"kubernetes.io/projected/4a9f66b5-a4ee-40b5-95cf-159557632d17-kube-api-access-8m2j4\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.332539 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5pj6\" (UniqueName: \"kubernetes.io/projected/bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4-kube-api-access-r5pj6\") pod \"nmstate-handler-s6vrk\" (UID: \"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4\") " pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.380800 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"
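The nestedpendingoperations.go record above shows how a failed mount is frozen rather than retried immediately: the operation records "No retries permitted until" now+500ms. A sketch of that per-operation backoff pattern (the initial 500ms matches the logged durationBeforeRetry; the doubling and the cap are assumptions about the general pattern, not values from this log):

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay     time.Duration
	notBefore time.Time
}

// fail records a failure and pushes the next permitted attempt into the future.
func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // durationBeforeRetry 500ms, as logged
	} else {
		b.delay *= 2 // growth factor is an assumption
		if b.delay > 2*time.Minute {
			b.delay = 2 * time.Minute // cap is an assumption
		}
	}
	b.notBefore = now.Add(b.delay)
}

// allowed reports whether a retry is permitted yet.
func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

func main() {
	var b backoff
	now := time.Now()
	b.fail(now)
	fmt.Printf("no retries permitted until %s (durationBeforeRetry %s), allowed now: %v\n",
		b.notBefore.Format(time.RFC3339Nano), b.delay, b.allowed(now))
}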
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.403868 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.403958 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5p9p\" (UniqueName: \"kubernetes.io/projected/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-kube-api-access-f5p9p\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.404059 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: E0203 12:19:00.404229 4679 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Feb 03 12:19:00 crc kubenswrapper[4679]: E0203 12:19:00.404302 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-plugin-serving-cert podName:ee8ef129-bd8a-4296-9ac9-8bad21434ec6 nodeName:}" failed. No retries permitted until 2026-02-03 12:19:00.904278242 +0000 UTC m=+813.379174330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-wgq45" (UID: "ee8ef129-bd8a-4296-9ac9-8bad21434ec6") : secret "plugin-serving-cert" not found
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.405558 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.424335 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.425192 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5p9p\" (UniqueName: \"kubernetes.io/projected/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-kube-api-access-f5p9p\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: W0203 12:19:00.473418 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7b6e88_5c1f_4d52_9876_a99fa1c3e6c4.slice/crio-039bd21c4850da69470e040f88aea2e927b32a7ecdb6a69b53a3512a866471c3 WatchSource:0}: Error finding container 039bd21c4850da69470e040f88aea2e927b32a7ecdb6a69b53a3512a866471c3: Status 404 returned error can't find the container with id 039bd21c4850da69470e040f88aea2e927b32a7ecdb6a69b53a3512a866471c3
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.504841 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55994dd87b-dw7z5"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.506667 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.546829 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9dl9" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:19:00 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:19:00 crc kubenswrapper[4679]: >
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.565725 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55994dd87b-dw7z5"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614195 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-oauth-serving-cert\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614252 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-service-ca\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614286 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwc9m\" (UniqueName: \"kubernetes.io/projected/1d9763b1-a40a-49f3-a378-47c6231e6bce-kube-api-access-gwc9m\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614313 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-oauth-config\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614334 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-trusted-ca-bundle\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614348 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-serving-cert\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.614381 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-config\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716173 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwc9m\" (UniqueName: \"kubernetes.io/projected/1d9763b1-a40a-49f3-a378-47c6231e6bce-kube-api-access-gwc9m\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716254 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-oauth-config\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716290 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-trusted-ca-bundle\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716312 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-serving-cert\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716335 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-config\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716458 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-oauth-serving-cert\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.716479 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-service-ca\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.718686 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-config\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.718906 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-oauth-serving-cert\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.719690 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-trusted-ca-bundle\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.721194 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-n5w4n"]
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.722559 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9763b1-a40a-49f3-a378-47c6231e6bce-service-ca\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.727446 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-oauth-config\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.729537 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9763b1-a40a-49f3-a378-47c6231e6bce-console-serving-cert\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: W0203 12:19:00.731348 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d978da5_5322_40f9_a7ea_c7dd2295874f.slice/crio-86b1fff18b017b4a2a326ddca6136ce013dc8776037f0e2262fd51c08aeae1c9 WatchSource:0}: Error finding container 86b1fff18b017b4a2a326ddca6136ce013dc8776037f0e2262fd51c08aeae1c9: Status 404 returned error can't find the container with id 86b1fff18b017b4a2a326ddca6136ce013dc8776037f0e2262fd51c08aeae1c9
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.738799 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwc9m\" (UniqueName: \"kubernetes.io/projected/1d9763b1-a40a-49f3-a378-47c6231e6bce-kube-api-access-gwc9m\") pod \"console-55994dd87b-dw7z5\" (UID: \"1d9763b1-a40a-49f3-a378-47c6231e6bce\") " pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.817903 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4a9f66b5-a4ee-40b5-95cf-159557632d17-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.821707 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4a9f66b5-a4ee-40b5-95cf-159557632d17-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-5fck4\" (UID: \"4a9f66b5-a4ee-40b5-95cf-159557632d17\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.830712 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.925459 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:00 crc kubenswrapper[4679]: I0203 12:19:00.927455 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee8ef129-bd8a-4296-9ac9-8bad21434ec6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wgq45\" (UID: \"ee8ef129-bd8a-4296-9ac9-8bad21434ec6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.016297 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.056708 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55994dd87b-dw7z5"]
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.081037 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55994dd87b-dw7z5" event={"ID":"1d9763b1-a40a-49f3-a378-47c6231e6bce","Type":"ContainerStarted","Data":"4f77fad8660d4b77286c92c85d5fd37385dab2e207469e013b774ecd2250ca63"}
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.082414 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s6vrk" event={"ID":"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4","Type":"ContainerStarted","Data":"039bd21c4850da69470e040f88aea2e927b32a7ecdb6a69b53a3512a866471c3"}
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.083823 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n" event={"ID":"5d978da5-5322-40f9-a7ea-c7dd2295874f","Type":"ContainerStarted","Data":"86b1fff18b017b4a2a326ddca6136ce013dc8776037f0e2262fd51c08aeae1c9"}
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.198886 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.235155 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"]
Feb 03 12:19:01 crc kubenswrapper[4679]: W0203 12:19:01.241808 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a9f66b5_a4ee_40b5_95cf_159557632d17.slice/crio-bf44dbb4028bcae357e8db75767630a38d372863c77c76aa3e847e1803ef0d89 WatchSource:0}: Error finding container bf44dbb4028bcae357e8db75767630a38d372863c77c76aa3e847e1803ef0d89: Status 404 returned error can't find the container with id bf44dbb4028bcae357e8db75767630a38d372863c77c76aa3e847e1803ef0d89
Feb 03 12:19:01 crc kubenswrapper[4679]: I0203 12:19:01.660722 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45"]
Feb 03 12:19:01 crc kubenswrapper[4679]: W0203 12:19:01.669211 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee8ef129_bd8a_4296_9ac9_8bad21434ec6.slice/crio-c837e1ba71601267816909b6028b65135f15632a24311f22d10e0442d5ba26eb WatchSource:0}: Error finding container c837e1ba71601267816909b6028b65135f15632a24311f22d10e0442d5ba26eb: Status 404 returned error can't find the container with id c837e1ba71601267816909b6028b65135f15632a24311f22d10e0442d5ba26eb
Feb 03 12:19:02 crc kubenswrapper[4679]: I0203 12:19:02.095227 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4" event={"ID":"4a9f66b5-a4ee-40b5-95cf-159557632d17","Type":"ContainerStarted","Data":"bf44dbb4028bcae357e8db75767630a38d372863c77c76aa3e847e1803ef0d89"}
Feb 03 12:19:02 crc kubenswrapper[4679]: I0203 12:19:02.097186 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55994dd87b-dw7z5" event={"ID":"1d9763b1-a40a-49f3-a378-47c6231e6bce","Type":"ContainerStarted","Data":"5bbea9f1851df18d2064e6db2af954405f21f9b878a88991b3e464665c79ffd7"}
Feb 03 12:19:02 crc kubenswrapper[4679]: I0203 12:19:02.098581 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45" event={"ID":"ee8ef129-bd8a-4296-9ac9-8bad21434ec6","Type":"ContainerStarted","Data":"c837e1ba71601267816909b6028b65135f15632a24311f22d10e0442d5ba26eb"}
Feb 03 12:19:02 crc kubenswrapper[4679]: I0203 12:19:02.129604 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55994dd87b-dw7z5" podStartSLOduration=2.12957714 podStartE2EDuration="2.12957714s" podCreationTimestamp="2026-02-03 12:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:19:02.119251691 +0000 UTC m=+814.594147779" watchObservedRunningTime="2026-02-03 12:19:02.12957714 +0000 UTC m=+814.604473228"
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.117271 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4" event={"ID":"4a9f66b5-a4ee-40b5-95cf-159557632d17","Type":"ContainerStarted","Data":"4c5077dcf32a643bdb8756398d13b170384eb207b1adbad1299e6d3b413dad21"}
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.119344 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4"
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.119528 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s6vrk" event={"ID":"bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4","Type":"ContainerStarted","Data":"c97b6ff3d06c98768237e205bf8d74f0ff9072ccd4653c4545679135def52c9e"}
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.119661 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.121235 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n" event={"ID":"5d978da5-5322-40f9-a7ea-c7dd2295874f","Type":"ContainerStarted","Data":"21259b240d995df3f98e03e28eebbd8822262ea97e06253d1f685e446adc4680"}
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.140452 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4" podStartSLOduration=2.1305040809999998 podStartE2EDuration="5.140416676s" podCreationTimestamp="2026-02-03 12:19:00 +0000 UTC" firstStartedPulling="2026-02-03 12:19:01.247259481 +0000 UTC m=+813.722155569" lastFinishedPulling="2026-02-03 12:19:04.257172076 +0000 UTC m=+816.732068164" observedRunningTime="2026-02-03 12:19:05.137562997 +0000 UTC m=+817.612459095" watchObservedRunningTime="2026-02-03 12:19:05.140416676 +0000 UTC m=+817.615312764"
Feb 03 12:19:05 crc kubenswrapper[4679]: I0203 12:19:05.159864 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-s6vrk" podStartSLOduration=1.441316779 podStartE2EDuration="5.159845254s" podCreationTimestamp="2026-02-03 12:19:00 +0000 UTC" firstStartedPulling="2026-02-03 12:19:00.480335125 +0000 UTC m=+812.955231213" lastFinishedPulling="2026-02-03 12:19:04.1988636 +0000 UTC m=+816.673759688" observedRunningTime="2026-02-03 12:19:05.156392691 +0000 UTC m=+817.631288799" watchObservedRunningTime="2026-02-03 12:19:05.159845254 +0000 UTC m=+817.634741342"
Feb 03 12:19:06 crc kubenswrapper[4679]: I0203 12:19:06.130186 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45" event={"ID":"ee8ef129-bd8a-4296-9ac9-8bad21434ec6","Type":"ContainerStarted","Data":"375938a5faf7eb1c14d55eec0486615f8c4a21d5dd2a7b9083cb260e1914a02a"}
Feb 03 12:19:06 crc kubenswrapper[4679]: I0203 12:19:06.159999 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wgq45" podStartSLOduration=2.631078578 podStartE2EDuration="6.159970322s" podCreationTimestamp="2026-02-03 12:19:00 +0000 UTC" firstStartedPulling="2026-02-03 12:19:01.671523648 +0000 UTC m=+814.146419746" lastFinishedPulling="2026-02-03 12:19:05.200415412 +0000 UTC m=+817.675311490" observedRunningTime="2026-02-03 12:19:06.151518368 +0000 UTC m=+818.626414456" watchObservedRunningTime="2026-02-03 12:19:06.159970322 +0000 UTC m=+818.634866410"
Feb 03 12:19:06 crc kubenswrapper[4679]: I0203 12:19:06.735802 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:19:06 crc kubenswrapper[4679]: I0203 12:19:06.735889 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:19:07 crc kubenswrapper[4679]: I0203 12:19:07.138737 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n" event={"ID":"5d978da5-5322-40f9-a7ea-c7dd2295874f","Type":"ContainerStarted","Data":"e730c491ce6d586e76e2fd74154d2c1f504bc5723e6457cdcf0375fe20780c4c"}
Feb 03 12:19:07 crc kubenswrapper[4679]: I0203 12:19:07.159621 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-n5w4n" podStartSLOduration=1.107509183 podStartE2EDuration="7.159595198s" podCreationTimestamp="2026-02-03 12:19:00 +0000 UTC" firstStartedPulling="2026-02-03 12:19:00.734096572 +0000 UTC m=+813.208992670" lastFinishedPulling="2026-02-03 12:19:06.786182597 +0000 UTC m=+819.261078685" observedRunningTime="2026-02-03 12:19:07.157770784 +0000 UTC m=+819.632666882" watchObservedRunningTime="2026-02-03 12:19:07.159595198 +0000 UTC m=+819.634491296"
Feb 03 12:19:09 crc kubenswrapper[4679]: I0203 12:19:09.533803 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:19:09 crc kubenswrapper[4679]: I0203 12:19:09.580413 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:19:09 crc kubenswrapper[4679]: I0203 12:19:09.766057 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9dl9"]
Feb 03 12:19:10 crc kubenswrapper[4679]: I0203 12:19:10.455322 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-s6vrk"
Feb 03 12:19:10 crc kubenswrapper[4679]: I0203 12:19:10.831796 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:10 crc kubenswrapper[4679]: I0203 12:19:10.832170 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:10 crc kubenswrapper[4679]: I0203 12:19:10.837888 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.173524 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9dl9" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="registry-server" containerID="cri-o://e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04" gracePeriod=2
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.180234 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55994dd87b-dw7z5"
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.235419 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qlbms"]
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.562088 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9dl9"
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.605102 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztzvv\" (UniqueName: \"kubernetes.io/projected/2d61e086-16c5-4499-9834-182fe7561c0f-kube-api-access-ztzvv\") pod \"2d61e086-16c5-4499-9834-182fe7561c0f\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") "
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.605258 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-catalog-content\") pod \"2d61e086-16c5-4499-9834-182fe7561c0f\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") "
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.605314 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-utilities\") pod \"2d61e086-16c5-4499-9834-182fe7561c0f\" (UID: \"2d61e086-16c5-4499-9834-182fe7561c0f\") "
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.606475 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-utilities" (OuterVolumeSpecName: "utilities") pod "2d61e086-16c5-4499-9834-182fe7561c0f" (UID: "2d61e086-16c5-4499-9834-182fe7561c0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.612531 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d61e086-16c5-4499-9834-182fe7561c0f-kube-api-access-ztzvv" (OuterVolumeSpecName: "kube-api-access-ztzvv") pod "2d61e086-16c5-4499-9834-182fe7561c0f" (UID: "2d61e086-16c5-4499-9834-182fe7561c0f"). InnerVolumeSpecName "kube-api-access-ztzvv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.707133 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.707206 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztzvv\" (UniqueName: \"kubernetes.io/projected/2d61e086-16c5-4499-9834-182fe7561c0f-kube-api-access-ztzvv\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.751618 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d61e086-16c5-4499-9834-182fe7561c0f" (UID: "2d61e086-16c5-4499-9834-182fe7561c0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:19:11 crc kubenswrapper[4679]: I0203 12:19:11.808559 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d61e086-16c5-4499-9834-182fe7561c0f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.182710 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d61e086-16c5-4499-9834-182fe7561c0f" containerID="e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04" exitCode=0 Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.182799 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9dl9" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.182815 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerDied","Data":"e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04"} Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.182891 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9dl9" event={"ID":"2d61e086-16c5-4499-9834-182fe7561c0f","Type":"ContainerDied","Data":"9e2a9046b84df8e70c7ca6c18efad555d66a21389eb12d742a7cf0d9cc76d294"} Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.182923 4679 scope.go:117] "RemoveContainer" containerID="e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.204049 4679 scope.go:117] "RemoveContainer" containerID="38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.233522 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9dl9"] Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.239072 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9dl9"] Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.245786 4679 scope.go:117] "RemoveContainer" containerID="2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.264283 4679 scope.go:117] "RemoveContainer" containerID="e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04" Feb 03 12:19:12 crc kubenswrapper[4679]: E0203 12:19:12.264775 4679 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04\": container with ID starting with e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04 not found: ID does not exist" containerID="e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.264821 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04"} err="failed to get container status \"e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04\": rpc error: code = NotFound desc = could not find container \"e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04\": container with ID starting with e28b02ac997b3275c164d879618e3f71d250e812e48defe3671353359bdf3c04 not found: ID does not exist" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.264853 4679 scope.go:117] "RemoveContainer" containerID="38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244" Feb 03 12:19:12 crc kubenswrapper[4679]: E0203 12:19:12.265117 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244\": container with ID starting with 38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244 not found: ID does not exist" containerID="38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.265152 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244"} err="failed to get container status \"38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244\": rpc error: code = NotFound desc = could not find container \"38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244\": container with ID starting with 38e3bbde2e50b263079acbafcda05d610f4fa5c178e99b33a99cb05bb7a79244 not found: ID does not exist" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.265173 4679 scope.go:117] "RemoveContainer" containerID="2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67" Feb 03 12:19:12 crc kubenswrapper[4679]: E0203 12:19:12.265464 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67\": container with ID starting with 2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67 not found: ID does not exist" containerID="2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67" Feb 03 12:19:12 crc kubenswrapper[4679]: I0203 12:19:12.265491 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67"} err="failed to get container status \"2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67\": rpc error: code = NotFound desc = could not find container \"2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67\": container with ID starting with 2703bd84ae39ab08194f7f91426ff8f33e7b0c75aad2115e046ce01b06387f67 not found: ID does not exist" Feb 03 12:19:14 crc kubenswrapper[4679]: I0203 12:19:14.222323 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" path="/var/lib/kubelet/pods/2d61e086-16c5-4499-9834-182fe7561c0f/volumes" Feb 03 12:19:21 crc kubenswrapper[4679]: I0203 12:19:21.025189 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-5fck4" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.871455 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds"] Feb 03 12:19:33 crc kubenswrapper[4679]: E0203 12:19:33.872698 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="extract-utilities" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.872714 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="extract-utilities" Feb 03 12:19:33 crc kubenswrapper[4679]: E0203 12:19:33.872724 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="registry-server" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.872730 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="registry-server" Feb 03 12:19:33 crc kubenswrapper[4679]: E0203 12:19:33.872743 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="extract-content" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.872750 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="extract-content" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.872856 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d61e086-16c5-4499-9834-182fe7561c0f" containerName="registry-server" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.873763 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.878217 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.889337 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds"] Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.985843 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.985925 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:33 crc kubenswrapper[4679]: I0203 12:19:33.986038 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcj7\" (UniqueName: \"kubernetes.io/projected/c3851540-2643-497e-a54c-d7543287ebca-kube-api-access-gvcj7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.087147 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.087217 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.087341 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvcj7\" (UniqueName: \"kubernetes.io/projected/c3851540-2643-497e-a54c-d7543287ebca-kube-api-access-gvcj7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.087789 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.087836 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.111186 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvcj7\" (UniqueName: \"kubernetes.io/projected/c3851540-2643-497e-a54c-d7543287ebca-kube-api-access-gvcj7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.194099 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:34 crc kubenswrapper[4679]: I0203 12:19:34.451608 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds"] Feb 03 12:19:35 crc kubenswrapper[4679]: I0203 12:19:35.331792 4679 generic.go:334] "Generic (PLEG): container finished" podID="c3851540-2643-497e-a54c-d7543287ebca" containerID="027a9177be892e6ff39f6747a756a44acc7549ba76d10bbdddc8bff75b872b48" exitCode=0 Feb 03 12:19:35 crc kubenswrapper[4679]: I0203 12:19:35.331865 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" event={"ID":"c3851540-2643-497e-a54c-d7543287ebca","Type":"ContainerDied","Data":"027a9177be892e6ff39f6747a756a44acc7549ba76d10bbdddc8bff75b872b48"} Feb 03 12:19:35 crc kubenswrapper[4679]: I0203 12:19:35.332612 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" event={"ID":"c3851540-2643-497e-a54c-d7543287ebca","Type":"ContainerStarted","Data":"0f0ce2a7a455ff129107cd8047af946d2fc15f755cddd02cbdfd65fda09257e4"} Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.277859 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qlbms" podUID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" containerName="console" containerID="cri-o://31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0" gracePeriod=15 Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.660113 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qlbms_42fe6faa-e19f-4b6d-acb9-df0ff4c35398/console/0.log" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.660191 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729080 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-oauth-serving-cert\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729477 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-serving-cert\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729513 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-config\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729575 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-service-ca\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729599 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtqvf\" (UniqueName: \"kubernetes.io/projected/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-kube-api-access-vtqvf\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729622 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-oauth-config\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.729723 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-trusted-ca-bundle\") pod \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\" (UID: \"42fe6faa-e19f-4b6d-acb9-df0ff4c35398\") " Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.730170 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.730467 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-config" (OuterVolumeSpecName: "console-config") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.730880 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.730876 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-service-ca" (OuterVolumeSpecName: "service-ca") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.736192 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.736266 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.736327 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.736540 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.736739 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-kube-api-access-vtqvf" (OuterVolumeSpecName: "kube-api-access-vtqvf") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "kube-api-access-vtqvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.736848 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "42fe6faa-e19f-4b6d-acb9-df0ff4c35398" (UID: "42fe6faa-e19f-4b6d-acb9-df0ff4c35398"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.737234 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f8303cada887334f02fe5707aa2a3b67415ca179b5f84b540c5e9544432dd4c"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.737379 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://4f8303cada887334f02fe5707aa2a3b67415ca179b5f84b540c5e9544432dd4c" gracePeriod=600 Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832063 4679 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832097 4679 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832109 4679 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832118 4679 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832129 4679 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832141 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtqvf\" (UniqueName: \"kubernetes.io/projected/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-kube-api-access-vtqvf\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:36 crc kubenswrapper[4679]: I0203 12:19:36.832154 4679 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/42fe6faa-e19f-4b6d-acb9-df0ff4c35398-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.347695 4679 generic.go:334] "Generic (PLEG): container finished" podID="c3851540-2643-497e-a54c-d7543287ebca" containerID="54ac6debd9230ffb2f8d991e36a83b9050f79319b25bf0b837763e8c1261c183" exitCode=0 Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.347756 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" event={"ID":"c3851540-2643-497e-a54c-d7543287ebca","Type":"ContainerDied","Data":"54ac6debd9230ffb2f8d991e36a83b9050f79319b25bf0b837763e8c1261c183"} Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.351135 4679 generic.go:334] "Generic (PLEG): container finished" 
podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="4f8303cada887334f02fe5707aa2a3b67415ca179b5f84b540c5e9544432dd4c" exitCode=0 Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.351269 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"4f8303cada887334f02fe5707aa2a3b67415ca179b5f84b540c5e9544432dd4c"} Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.351317 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"8c7f7c8aab7624469328d1de64c08b453b25a5c763215ebe97829d21aefac1e6"} Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.351347 4679 scope.go:117] "RemoveContainer" containerID="5b81e2fd517b786183416342868b0696a900871d24e38432da83ad64817cbf19" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.354233 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qlbms_42fe6faa-e19f-4b6d-acb9-df0ff4c35398/console/0.log" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.354291 4679 generic.go:334] "Generic (PLEG): container finished" podID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" containerID="31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0" exitCode=2 Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.354343 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qlbms" event={"ID":"42fe6faa-e19f-4b6d-acb9-df0ff4c35398","Type":"ContainerDied","Data":"31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0"} Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.354435 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qlbms" event={"ID":"42fe6faa-e19f-4b6d-acb9-df0ff4c35398","Type":"ContainerDied","Data":"9b9348beb4767a0db3b7e3fb9831eae77faec103df4d21ae9aadc65727ff4044"} Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.354444 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qlbms" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.384624 4679 scope.go:117] "RemoveContainer" containerID="31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.406063 4679 scope.go:117] "RemoveContainer" containerID="31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0" Feb 03 12:19:37 crc kubenswrapper[4679]: E0203 12:19:37.406818 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0\": container with ID starting with 31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0 not found: ID does not exist" containerID="31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.406857 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0"} err="failed to get container status \"31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0\": rpc error: code = NotFound desc = could not find container \"31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0\": container with ID starting with 31b75a23b3c7aca1b5d6de043aa6a9200153dbb8832a8bb2fd59853d169635d0 not found: ID does not exist" Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.425410 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qlbms"] Feb 03 12:19:37 crc kubenswrapper[4679]: I0203 12:19:37.432201 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qlbms"] Feb 03 12:19:38 crc kubenswrapper[4679]: I0203 12:19:38.221968 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" path="/var/lib/kubelet/pods/42fe6faa-e19f-4b6d-acb9-df0ff4c35398/volumes" Feb 03 12:19:38 crc kubenswrapper[4679]: I0203 12:19:38.378272 4679 generic.go:334] "Generic (PLEG): container finished" podID="c3851540-2643-497e-a54c-d7543287ebca" containerID="dc8190fa8478a9354b928d86afe0a396468021263dccc9c200a6cab08fde2ea4" exitCode=0 Feb 03 12:19:38 crc kubenswrapper[4679]: I0203 12:19:38.378427 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" event={"ID":"c3851540-2643-497e-a54c-d7543287ebca","Type":"ContainerDied","Data":"dc8190fa8478a9354b928d86afe0a396468021263dccc9c200a6cab08fde2ea4"} Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.624066 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.776569 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-util\") pod \"c3851540-2643-497e-a54c-d7543287ebca\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.776704 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvcj7\" (UniqueName: \"kubernetes.io/projected/c3851540-2643-497e-a54c-d7543287ebca-kube-api-access-gvcj7\") pod \"c3851540-2643-497e-a54c-d7543287ebca\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.776807 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-bundle\") pod \"c3851540-2643-497e-a54c-d7543287ebca\" (UID: \"c3851540-2643-497e-a54c-d7543287ebca\") " Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.778334 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-bundle" (OuterVolumeSpecName: "bundle") pod "c3851540-2643-497e-a54c-d7543287ebca" (UID: "c3851540-2643-497e-a54c-d7543287ebca"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.784529 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3851540-2643-497e-a54c-d7543287ebca-kube-api-access-gvcj7" (OuterVolumeSpecName: "kube-api-access-gvcj7") pod "c3851540-2643-497e-a54c-d7543287ebca" (UID: "c3851540-2643-497e-a54c-d7543287ebca"). InnerVolumeSpecName "kube-api-access-gvcj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.878470 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvcj7\" (UniqueName: \"kubernetes.io/projected/c3851540-2643-497e-a54c-d7543287ebca-kube-api-access-gvcj7\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:39 crc kubenswrapper[4679]: I0203 12:19:39.878981 4679 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:40 crc kubenswrapper[4679]: I0203 12:19:40.138900 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-util" (OuterVolumeSpecName: "util") pod "c3851540-2643-497e-a54c-d7543287ebca" (UID: "c3851540-2643-497e-a54c-d7543287ebca"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:19:40 crc kubenswrapper[4679]: I0203 12:19:40.183586 4679 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3851540-2643-497e-a54c-d7543287ebca-util\") on node \"crc\" DevicePath \"\"" Feb 03 12:19:40 crc kubenswrapper[4679]: I0203 12:19:40.400890 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" event={"ID":"c3851540-2643-497e-a54c-d7543287ebca","Type":"ContainerDied","Data":"0f0ce2a7a455ff129107cd8047af946d2fc15f755cddd02cbdfd65fda09257e4"} Feb 03 12:19:40 crc kubenswrapper[4679]: I0203 12:19:40.400938 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds" Feb 03 12:19:40 crc kubenswrapper[4679]: I0203 12:19:40.400952 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0ce2a7a455ff129107cd8047af946d2fc15f755cddd02cbdfd65fda09257e4" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.898716 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz"] Feb 03 12:19:48 crc kubenswrapper[4679]: E0203 12:19:48.899997 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="util" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900014 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="util" Feb 03 12:19:48 crc kubenswrapper[4679]: E0203 12:19:48.900039 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="pull" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900046 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="pull" Feb 03 12:19:48 crc kubenswrapper[4679]: E0203 12:19:48.900060 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" containerName="console" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900066 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" containerName="console" Feb 03 12:19:48 crc kubenswrapper[4679]: E0203 12:19:48.900080 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="extract" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900090 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="extract" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900206 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fe6faa-e19f-4b6d-acb9-df0ff4c35398" containerName="console" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900222 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3851540-2643-497e-a54c-d7543287ebca" containerName="extract" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.900823 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.904194 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.904202 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.904399 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.904399 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ptlqk" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.905530 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 03 12:19:48 crc kubenswrapper[4679]: I0203 12:19:48.920043 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz"] Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.034851 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8aafdc-129f-420c-a901-fa59576bf426-webhook-cert\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.035521 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lz29\" (UniqueName: \"kubernetes.io/projected/2b8aafdc-129f-420c-a901-fa59576bf426-kube-api-access-6lz29\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.035599 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8aafdc-129f-420c-a901-fa59576bf426-apiservice-cert\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.137049 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lz29\" (UniqueName: \"kubernetes.io/projected/2b8aafdc-129f-420c-a901-fa59576bf426-kube-api-access-6lz29\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.137120 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8aafdc-129f-420c-a901-fa59576bf426-apiservice-cert\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.137200 
4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8aafdc-129f-420c-a901-fa59576bf426-webhook-cert\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.159772 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8aafdc-129f-420c-a901-fa59576bf426-apiservice-cert\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.160174 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8aafdc-129f-420c-a901-fa59576bf426-webhook-cert\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.181857 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lz29\" (UniqueName: \"kubernetes.io/projected/2b8aafdc-129f-420c-a901-fa59576bf426-kube-api-access-6lz29\") pod \"metallb-operator-controller-manager-696d65d798-4rvqz\" (UID: \"2b8aafdc-129f-420c-a901-fa59576bf426\") " pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.186935 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b48565759-btpsb"] Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.188006 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.190862 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.191284 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-w7w52" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.191869 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.269055 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.300229 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b48565759-btpsb"] Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.340958 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e1b318f-e557-49ba-91c9-3489ccb19246-webhook-cert\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.341023 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8z2\" (UniqueName: \"kubernetes.io/projected/8e1b318f-e557-49ba-91c9-3489ccb19246-kube-api-access-lk8z2\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.341094 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e1b318f-e557-49ba-91c9-3489ccb19246-apiservice-cert\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.442684 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e1b318f-e557-49ba-91c9-3489ccb19246-webhook-cert\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.443263 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8z2\" (UniqueName: \"kubernetes.io/projected/8e1b318f-e557-49ba-91c9-3489ccb19246-kube-api-access-lk8z2\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.443321 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e1b318f-e557-49ba-91c9-3489ccb19246-apiservice-cert\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.448804 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e1b318f-e557-49ba-91c9-3489ccb19246-apiservice-cert\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.455052 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/8e1b318f-e557-49ba-91c9-3489ccb19246-webhook-cert\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.478219 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8z2\" (UniqueName: \"kubernetes.io/projected/8e1b318f-e557-49ba-91c9-3489ccb19246-kube-api-access-lk8z2\") pod \"metallb-operator-webhook-server-7b48565759-btpsb\" (UID: \"8e1b318f-e557-49ba-91c9-3489ccb19246\") " pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.543063 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.719696 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz"] Feb 03 12:19:49 crc kubenswrapper[4679]: I0203 12:19:49.833245 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b48565759-btpsb"] Feb 03 12:19:49 crc kubenswrapper[4679]: W0203 12:19:49.840907 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e1b318f_e557_49ba_91c9_3489ccb19246.slice/crio-87d60b5b61314f539534056fcb268edbf2cfa476b65a2eb364d8ccedce728d5b WatchSource:0}: Error finding container 87d60b5b61314f539534056fcb268edbf2cfa476b65a2eb364d8ccedce728d5b: Status 404 returned error can't find the container with id 87d60b5b61314f539534056fcb268edbf2cfa476b65a2eb364d8ccedce728d5b Feb 03 12:19:50 crc kubenswrapper[4679]: I0203 12:19:50.477351 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" event={"ID":"2b8aafdc-129f-420c-a901-fa59576bf426","Type":"ContainerStarted","Data":"27065bd866dac73ce84da89965078d50a7a5e0db6828de754d4e0d64057bf6ce"} Feb 03 12:19:50 crc kubenswrapper[4679]: I0203 12:19:50.479476 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" event={"ID":"8e1b318f-e557-49ba-91c9-3489ccb19246","Type":"ContainerStarted","Data":"87d60b5b61314f539534056fcb268edbf2cfa476b65a2eb364d8ccedce728d5b"} Feb 03 12:19:55 crc kubenswrapper[4679]: I0203 12:19:55.526439 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" event={"ID":"2b8aafdc-129f-420c-a901-fa59576bf426","Type":"ContainerStarted","Data":"b73c35afc9bbceccc7e0aab0d14aa81b5bd3edeca3232b01deb7b09f3b39f9a8"} Feb 03 12:19:55 crc kubenswrapper[4679]: I0203 12:19:55.527308 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:19:55 crc kubenswrapper[4679]: I0203 12:19:55.550457 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" podStartSLOduration=2.894538699 podStartE2EDuration="7.550431689s" podCreationTimestamp="2026-02-03 12:19:48 +0000 UTC" firstStartedPulling="2026-02-03 12:19:49.736753471 +0000 UTC m=+862.211649559" lastFinishedPulling="2026-02-03 12:19:54.392646461 +0000 UTC m=+866.867542549" 
observedRunningTime="2026-02-03 12:19:55.548612635 +0000 UTC m=+868.023508723" watchObservedRunningTime="2026-02-03 12:19:55.550431689 +0000 UTC m=+868.025327777" Feb 03 12:19:59 crc kubenswrapper[4679]: I0203 12:19:59.560005 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" event={"ID":"8e1b318f-e557-49ba-91c9-3489ccb19246","Type":"ContainerStarted","Data":"a7ad04d57a9e23213b88542925b82aa2736cbf3a25b2133a55c74c49db836e30"} Feb 03 12:19:59 crc kubenswrapper[4679]: I0203 12:19:59.560943 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:19:59 crc kubenswrapper[4679]: I0203 12:19:59.593555 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" podStartSLOduration=1.961776701 podStartE2EDuration="10.593531937s" podCreationTimestamp="2026-02-03 12:19:49 +0000 UTC" firstStartedPulling="2026-02-03 12:19:49.846096017 +0000 UTC m=+862.320992105" lastFinishedPulling="2026-02-03 12:19:58.477851253 +0000 UTC m=+870.952747341" observedRunningTime="2026-02-03 12:19:59.584054759 +0000 UTC m=+872.058950847" watchObservedRunningTime="2026-02-03 12:19:59.593531937 +0000 UTC m=+872.068428025" Feb 03 12:20:09 crc kubenswrapper[4679]: I0203 12:20:09.582616 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7b48565759-btpsb" Feb 03 12:20:29 crc kubenswrapper[4679]: I0203 12:20:29.272018 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-696d65d798-4rvqz" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.019259 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jqlc5"] Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.022389 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.023290 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c"] Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.024187 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.024327 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.025674 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.025714 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kj4nn" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.025883 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.035155 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c"] Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086479 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5q92\" (UniqueName: \"kubernetes.io/projected/97eb643e-6db5-4612-acbf-eef52bbd1cba-kube-api-access-r5q92\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086550 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-metrics-certs\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086580 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-conf\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086602 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-sockets\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086863 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-startup\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086921 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-reloader\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.086982 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxh6r\" (UniqueName: 
\"kubernetes.io/projected/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-kube-api-access-mxh6r\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.087047 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97eb643e-6db5-4612-acbf-eef52bbd1cba-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.087099 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-metrics\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.123874 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-72kj8"] Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.125111 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.130046 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.130060 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.130963 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dgl6r" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.131294 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.147889 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-sd924"] Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.149058 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.151968 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.170810 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sd924"] Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188315 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-conf\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188418 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-sockets\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188462 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmk4j\" (UniqueName: \"kubernetes.io/projected/8f94f678-3ab0-4078-b6ad-361e9326083c-kube-api-access-xmk4j\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188486 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-startup\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188505 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-reloader\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188534 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxh6r\" (UniqueName: \"kubernetes.io/projected/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-kube-api-access-mxh6r\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188558 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97eb643e-6db5-4612-acbf-eef52bbd1cba-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188581 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188602 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-metrics\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188622 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-cert\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188646 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-metrics-certs\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188676 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5w7\" (UniqueName: \"kubernetes.io/projected/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-kube-api-access-sh5w7\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188700 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5q92\" (UniqueName: \"kubernetes.io/projected/97eb643e-6db5-4612-acbf-eef52bbd1cba-kube-api-access-r5q92\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188730 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-metrics-certs\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188756 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-metallb-excludel2\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.188777 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-metrics-certs\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.189478 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-conf\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.189552 4679 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.189744 
4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97eb643e-6db5-4612-acbf-eef52bbd1cba-cert podName:97eb643e-6db5-4612-acbf-eef52bbd1cba nodeName:}" failed. No retries permitted until 2026-02-03 12:20:30.689643133 +0000 UTC m=+903.164539451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97eb643e-6db5-4612-acbf-eef52bbd1cba-cert") pod "frr-k8s-webhook-server-7df86c4f6c-w9d6c" (UID: "97eb643e-6db5-4612-acbf-eef52bbd1cba") : secret "frr-k8s-webhook-server-cert" not found Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.190066 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-reloader\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.190148 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-sockets\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.190424 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-metrics\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.190708 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-frr-startup\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.210407 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-metrics-certs\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.215676 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5q92\" (UniqueName: \"kubernetes.io/projected/97eb643e-6db5-4612-acbf-eef52bbd1cba-kube-api-access-r5q92\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.217174 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxh6r\" (UniqueName: \"kubernetes.io/projected/cd3be7cf-aa64-4ffc-8b96-a567d85a2c35-kube-api-access-mxh6r\") pod \"frr-k8s-jqlc5\" (UID: \"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35\") " pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290239 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-metrics-certs\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290323 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh5w7\" (UniqueName: \"kubernetes.io/projected/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-kube-api-access-sh5w7\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290383 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-metrics-certs\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290413 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-metallb-excludel2\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290467 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk4j\" (UniqueName: \"kubernetes.io/projected/8f94f678-3ab0-4078-b6ad-361e9326083c-kube-api-access-xmk4j\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290534 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.290569 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-cert\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.290819 4679 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.290902 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-metrics-certs podName:8f94f678-3ab0-4078-b6ad-361e9326083c nodeName:}" failed. No retries permitted until 2026-02-03 12:20:30.790877074 +0000 UTC m=+903.265773162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-metrics-certs") pod "controller-6968d8fdc4-sd924" (UID: "8f94f678-3ab0-4078-b6ad-361e9326083c") : secret "controller-certs-secret" not found Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.291157 4679 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.291238 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist podName:7c2c7dcb-cc91-4794-baaf-f766c8e7cd55 nodeName:}" failed. 
No retries permitted until 2026-02-03 12:20:30.791213723 +0000 UTC m=+903.266110011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist") pod "speaker-72kj8" (UID: "7c2c7dcb-cc91-4794-baaf-f766c8e7cd55") : secret "metallb-memberlist" not found Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.291758 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-metallb-excludel2\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.293654 4679 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.294415 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-metrics-certs\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.305013 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-cert\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.310388 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh5w7\" (UniqueName: \"kubernetes.io/projected/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-kube-api-access-sh5w7\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.314540 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmk4j\" (UniqueName: \"kubernetes.io/projected/8f94f678-3ab0-4078-b6ad-361e9326083c-kube-api-access-xmk4j\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.341741 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.697251 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97eb643e-6db5-4612-acbf-eef52bbd1cba-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.701039 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97eb643e-6db5-4612-acbf-eef52bbd1cba-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w9d6c\" (UID: \"97eb643e-6db5-4612-acbf-eef52bbd1cba\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.769166 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"cd32cc0cfeb1b81b6e4b1b2b42b495ffb6ac160c063c6dd9f5c861b7e9e3b51d"} Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.798739 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.798833 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-metrics-certs\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.799056 4679 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 12:20:30 crc kubenswrapper[4679]: E0203 12:20:30.799162 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist podName:7c2c7dcb-cc91-4794-baaf-f766c8e7cd55 nodeName:}" failed. No retries permitted until 2026-02-03 12:20:31.799136334 +0000 UTC m=+904.274032422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist") pod "speaker-72kj8" (UID: "7c2c7dcb-cc91-4794-baaf-f766c8e7cd55") : secret "metallb-memberlist" not found Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.802345 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f94f678-3ab0-4078-b6ad-361e9326083c-metrics-certs\") pod \"controller-6968d8fdc4-sd924\" (UID: \"8f94f678-3ab0-4078-b6ad-361e9326083c\") " pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:30 crc kubenswrapper[4679]: I0203 12:20:30.997143 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.064521 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.273387 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c"] Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.355971 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sd924"] Feb 03 12:20:31 crc kubenswrapper[4679]: W0203 12:20:31.357509 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f94f678_3ab0_4078_b6ad_361e9326083c.slice/crio-1e0912944e5fe36809cbcd4b3d95f04bbd75d88a4b11a2e25f2117daf0c0715e WatchSource:0}: Error finding container 1e0912944e5fe36809cbcd4b3d95f04bbd75d88a4b11a2e25f2117daf0c0715e: Status 404 returned error can't find the container with id 1e0912944e5fe36809cbcd4b3d95f04bbd75d88a4b11a2e25f2117daf0c0715e Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.777156 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" event={"ID":"97eb643e-6db5-4612-acbf-eef52bbd1cba","Type":"ContainerStarted","Data":"596ba35cf179441d5c854ac504f0ecb6e28f8dd2221b29e8c5d237c49515da3f"} Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.780475 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sd924" event={"ID":"8f94f678-3ab0-4078-b6ad-361e9326083c","Type":"ContainerStarted","Data":"f74f16ed04cdf2083c766d34b3329b954de8ef3a9685a776c6867efb8e970d26"} Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.780535 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sd924" event={"ID":"8f94f678-3ab0-4078-b6ad-361e9326083c","Type":"ContainerStarted","Data":"0fbad953687744c3e06262a27508844762d49748b7183fd92dc4aa5ab8e82664"} Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.780548 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sd924" event={"ID":"8f94f678-3ab0-4078-b6ad-361e9326083c","Type":"ContainerStarted","Data":"1e0912944e5fe36809cbcd4b3d95f04bbd75d88a4b11a2e25f2117daf0c0715e"} Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.781706 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.798890 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-sd924" podStartSLOduration=1.7988654670000002 podStartE2EDuration="1.798865467s" podCreationTimestamp="2026-02-03 12:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:20:31.798780774 +0000 UTC m=+904.273676862" watchObservedRunningTime="2026-02-03 12:20:31.798865467 +0000 UTC m=+904.273761555" Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.815397 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.821955 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/7c2c7dcb-cc91-4794-baaf-f766c8e7cd55-memberlist\") pod \"speaker-72kj8\" (UID: \"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55\") " pod="metallb-system/speaker-72kj8" Feb 03 12:20:31 crc kubenswrapper[4679]: I0203 12:20:31.941618 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-72kj8" Feb 03 12:20:31 crc kubenswrapper[4679]: W0203 12:20:31.967900 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2c7dcb_cc91_4794_baaf_f766c8e7cd55.slice/crio-a5b3a78ddd5031c99c090380cb3018b7a9a27cfa284b6f4f9c61b8b99371db1e WatchSource:0}: Error finding container a5b3a78ddd5031c99c090380cb3018b7a9a27cfa284b6f4f9c61b8b99371db1e: Status 404 returned error can't find the container with id a5b3a78ddd5031c99c090380cb3018b7a9a27cfa284b6f4f9c61b8b99371db1e Feb 03 12:20:32 crc kubenswrapper[4679]: I0203 12:20:32.793069 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-72kj8" event={"ID":"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55","Type":"ContainerStarted","Data":"bff6f42f7ce971b6e76a82bf39954e2be96a6bdeb3ad8a2753717f8b10d83464"} Feb 03 12:20:32 crc kubenswrapper[4679]: I0203 12:20:32.794292 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-72kj8" event={"ID":"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55","Type":"ContainerStarted","Data":"747a458962ccd74a35161dde3257d95bb7eb9c71237e803cc2211d76171a2737"} Feb 03 12:20:32 crc kubenswrapper[4679]: I0203 12:20:32.794412 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-72kj8" event={"ID":"7c2c7dcb-cc91-4794-baaf-f766c8e7cd55","Type":"ContainerStarted","Data":"a5b3a78ddd5031c99c090380cb3018b7a9a27cfa284b6f4f9c61b8b99371db1e"} Feb 03 12:20:32 crc kubenswrapper[4679]: I0203 12:20:32.794529 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-72kj8" Feb 03 12:20:38 crc kubenswrapper[4679]: I0203 12:20:38.238856 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-72kj8" podStartSLOduration=8.238830736 podStartE2EDuration="8.238830736s" podCreationTimestamp="2026-02-03 12:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:20:32.822125705 +0000 UTC m=+905.297021793" watchObservedRunningTime="2026-02-03 12:20:38.238830736 +0000 UTC m=+910.713726824" Feb 03 12:20:38 crc kubenswrapper[4679]: I0203 12:20:38.859377 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" event={"ID":"97eb643e-6db5-4612-acbf-eef52bbd1cba","Type":"ContainerStarted","Data":"32d8f89e7eac9bdfffd6b1c44e4252844fed99f0222e8100e930c2a36c070fe0"} Feb 03 12:20:38 crc kubenswrapper[4679]: I0203 12:20:38.859959 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:38 crc kubenswrapper[4679]: I0203 12:20:38.864884 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"e1b6227e6a70ba44d4803cdcd635d193df01d453883a0126778a4c63614b7f0b"} Feb 03 12:20:38 crc kubenswrapper[4679]: I0203 12:20:38.884439 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" podStartSLOduration=1.483146153 podStartE2EDuration="8.884407797s" podCreationTimestamp="2026-02-03 12:20:30 +0000 UTC" firstStartedPulling="2026-02-03 12:20:31.288956322 +0000 UTC m=+903.763852400" lastFinishedPulling="2026-02-03 12:20:38.690217946 +0000 UTC m=+911.165114044" observedRunningTime="2026-02-03 12:20:38.880619999 +0000 UTC m=+911.355516097" watchObservedRunningTime="2026-02-03 12:20:38.884407797 +0000 UTC m=+911.359303885" Feb 03 12:20:39 crc kubenswrapper[4679]: I0203 12:20:39.874902 4679 generic.go:334] "Generic (PLEG): container finished" podID="cd3be7cf-aa64-4ffc-8b96-a567d85a2c35" containerID="e1b6227e6a70ba44d4803cdcd635d193df01d453883a0126778a4c63614b7f0b" exitCode=0 Feb 03 12:20:39 crc kubenswrapper[4679]: I0203 12:20:39.875464 4679 generic.go:334] "Generic (PLEG): container finished" podID="cd3be7cf-aa64-4ffc-8b96-a567d85a2c35" containerID="676bf7bdc80b27ad329698174c58ebd142bba0effe60fbae700f240165bb3bfb" exitCode=0 Feb 03 12:20:39 crc kubenswrapper[4679]: I0203 12:20:39.875077 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerDied","Data":"e1b6227e6a70ba44d4803cdcd635d193df01d453883a0126778a4c63614b7f0b"} Feb 03 12:20:39 crc kubenswrapper[4679]: I0203 12:20:39.875583 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerDied","Data":"676bf7bdc80b27ad329698174c58ebd142bba0effe60fbae700f240165bb3bfb"} Feb 03 12:20:40 crc kubenswrapper[4679]: I0203 12:20:40.885302 4679 generic.go:334] "Generic (PLEG): container finished" podID="cd3be7cf-aa64-4ffc-8b96-a567d85a2c35" containerID="77880d17646a699ca0689899eb23b2214202fc9c411820a2f21748fa7358ad81" exitCode=0 Feb 03 12:20:40 crc kubenswrapper[4679]: I0203 12:20:40.885375 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerDied","Data":"77880d17646a699ca0689899eb23b2214202fc9c411820a2f21748fa7358ad81"} Feb 03 12:20:41 crc kubenswrapper[4679]: I0203 12:20:41.070300 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-sd924" Feb 03 12:20:41 crc kubenswrapper[4679]: I0203 12:20:41.901805 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"c3a862977e077d5c6e3dc5d49932c9238227faa5056a8b8dddf87d7d51f450c5"} Feb 03 12:20:41 crc kubenswrapper[4679]: I0203 12:20:41.901875 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"3d18925bbb5a6c9f46be9dad39d212b302c9ce95042084870af5a3d277dd1d0e"} Feb 03 12:20:41 crc kubenswrapper[4679]: I0203 12:20:41.901886 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"9760b571c35110ffd6237b3286692c7903961a67418a6d1e4133d567b7d37479"} Feb 03 12:20:41 crc kubenswrapper[4679]: I0203 12:20:41.901902 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" 
event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"e3716fb854d6fe501f6ea50a755c423970c0a0bf395e86af9b3fdd7e71f278b7"} Feb 03 12:20:41 crc kubenswrapper[4679]: I0203 12:20:41.901913 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"c0fe5c7551824ebec3ea1c02466883682fd15f9f6a25b1dda1d3fdaf5b2fea7c"} Feb 03 12:20:42 crc kubenswrapper[4679]: I0203 12:20:42.914783 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jqlc5" event={"ID":"cd3be7cf-aa64-4ffc-8b96-a567d85a2c35","Type":"ContainerStarted","Data":"1381a7bd37e44936cbc44bc4bb02bf16a4d15e418a36d2177483ccc1dc8fc04b"} Feb 03 12:20:42 crc kubenswrapper[4679]: I0203 12:20:42.916325 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:42 crc kubenswrapper[4679]: I0203 12:20:42.949231 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jqlc5" podStartSLOduration=5.746166586 podStartE2EDuration="13.949184265s" podCreationTimestamp="2026-02-03 12:20:29 +0000 UTC" firstStartedPulling="2026-02-03 12:20:30.461193535 +0000 UTC m=+902.936089623" lastFinishedPulling="2026-02-03 12:20:38.664211214 +0000 UTC m=+911.139107302" observedRunningTime="2026-02-03 12:20:42.944891424 +0000 UTC m=+915.419787512" watchObservedRunningTime="2026-02-03 12:20:42.949184265 +0000 UTC m=+915.424080353" Feb 03 12:20:45 crc kubenswrapper[4679]: I0203 12:20:45.342134 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:45 crc kubenswrapper[4679]: I0203 12:20:45.382723 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:20:51 crc kubenswrapper[4679]: I0203 12:20:51.194733 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w9d6c" Feb 03 12:20:51 crc kubenswrapper[4679]: I0203 12:20:51.949219 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-72kj8" Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.765439 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kkcns"] Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.766654 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.768705 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nmd6f" Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.770133 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.771972 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.850424 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kkcns"] Feb 03 12:20:54 crc kubenswrapper[4679]: I0203 12:20:54.899579 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56r2\" (UniqueName: \"kubernetes.io/projected/3ae3a06f-8c10-44f2-9056-063c1f06235e-kube-api-access-x56r2\") pod \"openstack-operator-index-kkcns\" (UID: \"3ae3a06f-8c10-44f2-9056-063c1f06235e\") " pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:20:55 crc kubenswrapper[4679]: I0203 12:20:55.003009 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56r2\" (UniqueName: \"kubernetes.io/projected/3ae3a06f-8c10-44f2-9056-063c1f06235e-kube-api-access-x56r2\") pod \"openstack-operator-index-kkcns\" (UID: \"3ae3a06f-8c10-44f2-9056-063c1f06235e\") " pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:20:55 crc kubenswrapper[4679]: I0203 12:20:55.023998 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x56r2\" (UniqueName: \"kubernetes.io/projected/3ae3a06f-8c10-44f2-9056-063c1f06235e-kube-api-access-x56r2\") pod \"openstack-operator-index-kkcns\" (UID: \"3ae3a06f-8c10-44f2-9056-063c1f06235e\") " pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:20:55 crc kubenswrapper[4679]: I0203 12:20:55.090969 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:20:55 crc kubenswrapper[4679]: I0203 12:20:55.336483 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kkcns"] Feb 03 12:20:56 crc kubenswrapper[4679]: I0203 12:20:56.236203 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kkcns" event={"ID":"3ae3a06f-8c10-44f2-9056-063c1f06235e","Type":"ContainerStarted","Data":"dac14bf4fb1804708adc5bb66c84e05dc9f40e242800c5a0d93263b471534d71"} Feb 03 12:20:58 crc kubenswrapper[4679]: I0203 12:20:58.932900 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kkcns"] Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.259939 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kkcns" event={"ID":"3ae3a06f-8c10-44f2-9056-063c1f06235e","Type":"ContainerStarted","Data":"bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5"} Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.260090 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kkcns" podUID="3ae3a06f-8c10-44f2-9056-063c1f06235e" containerName="registry-server" containerID="cri-o://bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5" gracePeriod=2 Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.282764 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kkcns" podStartSLOduration=1.820872461 podStartE2EDuration="5.28269226s" podCreationTimestamp="2026-02-03 12:20:54 +0000 UTC" firstStartedPulling="2026-02-03 12:20:55.3431147 +0000 UTC m=+927.818010788" lastFinishedPulling="2026-02-03 12:20:58.804934499 +0000 UTC m=+931.279830587" observedRunningTime="2026-02-03 12:20:59.275686039 +0000 UTC m=+931.750582137" watchObservedRunningTime="2026-02-03 12:20:59.28269226 +0000 UTC m=+931.757588348" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.744254 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-248q5"] Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.745565 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.750929 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.763420 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-248q5"] Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.787821 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x56r2\" (UniqueName: \"kubernetes.io/projected/3ae3a06f-8c10-44f2-9056-063c1f06235e-kube-api-access-x56r2\") pod \"3ae3a06f-8c10-44f2-9056-063c1f06235e\" (UID: \"3ae3a06f-8c10-44f2-9056-063c1f06235e\") " Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.789789 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98rf8\" (UniqueName: \"kubernetes.io/projected/cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a-kube-api-access-98rf8\") pod \"openstack-operator-index-248q5\" (UID: \"cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a\") " pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.797262 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae3a06f-8c10-44f2-9056-063c1f06235e-kube-api-access-x56r2" (OuterVolumeSpecName: "kube-api-access-x56r2") pod "3ae3a06f-8c10-44f2-9056-063c1f06235e" (UID: "3ae3a06f-8c10-44f2-9056-063c1f06235e"). InnerVolumeSpecName "kube-api-access-x56r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.892275 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98rf8\" (UniqueName: \"kubernetes.io/projected/cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a-kube-api-access-98rf8\") pod \"openstack-operator-index-248q5\" (UID: \"cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a\") " pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.892548 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x56r2\" (UniqueName: \"kubernetes.io/projected/3ae3a06f-8c10-44f2-9056-063c1f06235e-kube-api-access-x56r2\") on node \"crc\" DevicePath \"\"" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.912251 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98rf8\" (UniqueName: \"kubernetes.io/projected/cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a-kube-api-access-98rf8\") pod \"openstack-operator-index-248q5\" (UID: \"cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a\") " pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.948034 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x2ffk"] Feb 03 12:20:59 crc kubenswrapper[4679]: E0203 12:20:59.948429 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae3a06f-8c10-44f2-9056-063c1f06235e" containerName="registry-server" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.948444 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae3a06f-8c10-44f2-9056-063c1f06235e" containerName="registry-server" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.948611 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae3a06f-8c10-44f2-9056-063c1f06235e" containerName="registry-server" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.949722 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.964867 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2ffk"] Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.993638 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-catalog-content\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.994011 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-utilities\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:20:59 crc kubenswrapper[4679]: I0203 12:20:59.994171 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltksf\" (UniqueName: \"kubernetes.io/projected/ab74d592-39b0-427b-b5d3-e0b53f3b189a-kube-api-access-ltksf\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.075504 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.096181 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-catalog-content\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.096556 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-utilities\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.096917 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-catalog-content\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.096992 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-utilities\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.097063 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltksf\" (UniqueName: \"kubernetes.io/projected/ab74d592-39b0-427b-b5d3-e0b53f3b189a-kube-api-access-ltksf\") pod \"certified-operators-x2ffk\" (UID: 
\"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.117140 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltksf\" (UniqueName: \"kubernetes.io/projected/ab74d592-39b0-427b-b5d3-e0b53f3b189a-kube-api-access-ltksf\") pod \"certified-operators-x2ffk\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.273243 4679 generic.go:334] "Generic (PLEG): container finished" podID="3ae3a06f-8c10-44f2-9056-063c1f06235e" containerID="bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5" exitCode=0 Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.273626 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kkcns" event={"ID":"3ae3a06f-8c10-44f2-9056-063c1f06235e","Type":"ContainerDied","Data":"bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5"} Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.273668 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kkcns" event={"ID":"3ae3a06f-8c10-44f2-9056-063c1f06235e","Type":"ContainerDied","Data":"dac14bf4fb1804708adc5bb66c84e05dc9f40e242800c5a0d93263b471534d71"} Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.273691 4679 scope.go:117] "RemoveContainer" containerID="bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.273839 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kkcns" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.283240 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.310544 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kkcns"] Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.316277 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kkcns"] Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.325392 4679 scope.go:117] "RemoveContainer" containerID="bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5" Feb 03 12:21:00 crc kubenswrapper[4679]: E0203 12:21:00.325973 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5\": container with ID starting with bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5 not found: ID does not exist" containerID="bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.326008 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5"} err="failed to get container status \"bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5\": rpc error: code = NotFound desc = could not find container \"bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5\": container with ID starting with bab85c2837a2c421b73aae7e811b590749d2976faa6feed6f90d6f096d5494b5 not found: ID does not exist" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.355733 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jqlc5" Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.366405 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-248q5"] Feb 03 12:21:00 crc kubenswrapper[4679]: W0203 12:21:00.381272 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf6f1209_4fa8_4e3c_ba2d_0ebc986ead4a.slice/crio-dd0f612728b7347454fbfab8c10fc0f915dafc3f9ed7d9b1a8acb996bc8793b8 WatchSource:0}: Error finding container dd0f612728b7347454fbfab8c10fc0f915dafc3f9ed7d9b1a8acb996bc8793b8: Status 404 returned error can't find the container with id dd0f612728b7347454fbfab8c10fc0f915dafc3f9ed7d9b1a8acb996bc8793b8 Feb 03 12:21:00 crc kubenswrapper[4679]: I0203 12:21:00.611572 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2ffk"] Feb 03 12:21:01 crc kubenswrapper[4679]: I0203 12:21:01.283831 4679 generic.go:334] "Generic (PLEG): container finished" podID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerID="c891a77b31954f0657457b6b302f319575abfc4c138b7ac25e490387cab38a08" exitCode=0 Feb 03 12:21:01 crc kubenswrapper[4679]: I0203 12:21:01.283974 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerDied","Data":"c891a77b31954f0657457b6b302f319575abfc4c138b7ac25e490387cab38a08"} Feb 03 12:21:01 crc kubenswrapper[4679]: I0203 12:21:01.285289 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" 
event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerStarted","Data":"6cbc30bd6961a2ac9151296c7f229f9d2ad4be9d819d1be96545939233d6f939"} Feb 03 12:21:01 crc kubenswrapper[4679]: I0203 12:21:01.287871 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-248q5" event={"ID":"cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a","Type":"ContainerStarted","Data":"86f52f360a90f097a1051d7a08adee5125ca40d2b90ef944ea501a82aa54cda6"} Feb 03 12:21:01 crc kubenswrapper[4679]: I0203 12:21:01.287954 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-248q5" event={"ID":"cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a","Type":"ContainerStarted","Data":"dd0f612728b7347454fbfab8c10fc0f915dafc3f9ed7d9b1a8acb996bc8793b8"} Feb 03 12:21:01 crc kubenswrapper[4679]: I0203 12:21:01.328666 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-248q5" podStartSLOduration=2.23753341 podStartE2EDuration="2.328641075s" podCreationTimestamp="2026-02-03 12:20:59 +0000 UTC" firstStartedPulling="2026-02-03 12:21:00.386446906 +0000 UTC m=+932.861342994" lastFinishedPulling="2026-02-03 12:21:00.477554561 +0000 UTC m=+932.952450659" observedRunningTime="2026-02-03 12:21:01.328426119 +0000 UTC m=+933.803322217" watchObservedRunningTime="2026-02-03 12:21:01.328641075 +0000 UTC m=+933.803537163" Feb 03 12:21:02 crc kubenswrapper[4679]: I0203 12:21:02.220891 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae3a06f-8c10-44f2-9056-063c1f06235e" path="/var/lib/kubelet/pods/3ae3a06f-8c10-44f2-9056-063c1f06235e/volumes" Feb 03 12:21:02 crc kubenswrapper[4679]: I0203 12:21:02.302585 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerStarted","Data":"c804409fabc94a0e9423fb111a8a281aa6d887287275dc03c8e87a4086eaeda4"} Feb 03 12:21:03 crc kubenswrapper[4679]: I0203 12:21:03.313125 4679 generic.go:334] "Generic (PLEG): container finished" podID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerID="c804409fabc94a0e9423fb111a8a281aa6d887287275dc03c8e87a4086eaeda4" exitCode=0 Feb 03 12:21:03 crc kubenswrapper[4679]: I0203 12:21:03.313190 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerDied","Data":"c804409fabc94a0e9423fb111a8a281aa6d887287275dc03c8e87a4086eaeda4"} Feb 03 12:21:03 crc kubenswrapper[4679]: I0203 12:21:03.951246 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gj5zg"] Feb 03 12:21:03 crc kubenswrapper[4679]: I0203 12:21:03.953173 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:03 crc kubenswrapper[4679]: I0203 12:21:03.970223 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj5zg"] Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.076815 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-utilities\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.076883 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-catalog-content\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.076924 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9pjn\" (UniqueName: \"kubernetes.io/projected/4c8149aa-107e-456f-bbc4-559a485542ea-kube-api-access-b9pjn\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.178881 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-utilities\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.179227 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-catalog-content\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.179319 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9pjn\" (UniqueName: \"kubernetes.io/projected/4c8149aa-107e-456f-bbc4-559a485542ea-kube-api-access-b9pjn\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.179682 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-utilities\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.179981 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-catalog-content\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.206487 4679 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b9pjn\" (UniqueName: \"kubernetes.io/projected/4c8149aa-107e-456f-bbc4-559a485542ea-kube-api-access-b9pjn\") pod \"redhat-marketplace-gj5zg\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.283249 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.332098 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerStarted","Data":"44c170eaa1a0f61c855df82034e377a004e5107e50e835fc03fb8ca6ed2d69bc"} Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.348803 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x2ffk" podStartSLOduration=2.792504797 podStartE2EDuration="5.348776955s" podCreationTimestamp="2026-02-03 12:20:59 +0000 UTC" firstStartedPulling="2026-02-03 12:21:01.286099455 +0000 UTC m=+933.760995543" lastFinishedPulling="2026-02-03 12:21:03.842371613 +0000 UTC m=+936.317267701" observedRunningTime="2026-02-03 12:21:04.34778245 +0000 UTC m=+936.822678538" watchObservedRunningTime="2026-02-03 12:21:04.348776955 +0000 UTC m=+936.823673043" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.800753 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj5zg"] Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.946757 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7fqk2"] Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.948416 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.970600 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fqk2"] Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.993668 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hgn\" (UniqueName: \"kubernetes.io/projected/118753fe-f98a-427f-912a-a263b41c5056-kube-api-access-p2hgn\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.993730 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-utilities\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:04 crc kubenswrapper[4679]: I0203 12:21:04.993867 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-catalog-content\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.096171 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-catalog-content\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.096345 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hgn\" (UniqueName: \"kubernetes.io/projected/118753fe-f98a-427f-912a-a263b41c5056-kube-api-access-p2hgn\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.096417 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-utilities\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.096944 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-catalog-content\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.097059 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-utilities\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.122635 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p2hgn\" (UniqueName: \"kubernetes.io/projected/118753fe-f98a-427f-912a-a263b41c5056-kube-api-access-p2hgn\") pod \"community-operators-7fqk2\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.274109 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.353802 4679 generic.go:334] "Generic (PLEG): container finished" podID="4c8149aa-107e-456f-bbc4-559a485542ea" containerID="09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26" exitCode=0 Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.355415 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerDied","Data":"09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26"} Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.355457 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerStarted","Data":"32ed2113b7117cb3153228da7212ab130112e96ad8d32258a8e40e1c92760b30"} Feb 03 12:21:05 crc kubenswrapper[4679]: I0203 12:21:05.867047 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fqk2"] Feb 03 12:21:06 crc kubenswrapper[4679]: I0203 12:21:06.363930 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerStarted","Data":"d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e"} Feb 03 12:21:06 crc kubenswrapper[4679]: I0203 12:21:06.367026 4679 generic.go:334] "Generic (PLEG): container finished" podID="118753fe-f98a-427f-912a-a263b41c5056" containerID="e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e" exitCode=0 Feb 03 12:21:06 crc kubenswrapper[4679]: I0203 12:21:06.367092 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerDied","Data":"e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e"} Feb 03 12:21:06 crc kubenswrapper[4679]: I0203 12:21:06.367130 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerStarted","Data":"a2ee194e279310c14b8c96d20392a522a45cbf5d5703744f2403e1a4ffc2532d"} Feb 03 12:21:07 crc kubenswrapper[4679]: I0203 12:21:07.409943 4679 generic.go:334] "Generic (PLEG): container finished" podID="4c8149aa-107e-456f-bbc4-559a485542ea" containerID="d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e" exitCode=0 Feb 03 12:21:07 crc kubenswrapper[4679]: I0203 12:21:07.410480 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerDied","Data":"d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e"} Feb 03 12:21:07 crc kubenswrapper[4679]: I0203 12:21:07.427636 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" 
event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerStarted","Data":"6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797"} Feb 03 12:21:08 crc kubenswrapper[4679]: I0203 12:21:08.437933 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerStarted","Data":"e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b"} Feb 03 12:21:08 crc kubenswrapper[4679]: I0203 12:21:08.440674 4679 generic.go:334] "Generic (PLEG): container finished" podID="118753fe-f98a-427f-912a-a263b41c5056" containerID="6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797" exitCode=0 Feb 03 12:21:08 crc kubenswrapper[4679]: I0203 12:21:08.440758 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerDied","Data":"6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797"} Feb 03 12:21:08 crc kubenswrapper[4679]: I0203 12:21:08.460553 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gj5zg" podStartSLOduration=2.82108895 podStartE2EDuration="5.460523298s" podCreationTimestamp="2026-02-03 12:21:03 +0000 UTC" firstStartedPulling="2026-02-03 12:21:05.357443183 +0000 UTC m=+937.832339271" lastFinishedPulling="2026-02-03 12:21:07.996877531 +0000 UTC m=+940.471773619" observedRunningTime="2026-02-03 12:21:08.460142578 +0000 UTC m=+940.935038666" watchObservedRunningTime="2026-02-03 12:21:08.460523298 +0000 UTC m=+940.935419386" Feb 03 12:21:09 crc kubenswrapper[4679]: I0203 12:21:09.450210 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerStarted","Data":"8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f"} Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.076671 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.076732 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.116210 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.139836 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7fqk2" podStartSLOduration=3.596187191 podStartE2EDuration="6.139802912s" podCreationTimestamp="2026-02-03 12:21:04 +0000 UTC" firstStartedPulling="2026-02-03 12:21:06.375550404 +0000 UTC m=+938.850446492" lastFinishedPulling="2026-02-03 12:21:08.919166125 +0000 UTC m=+941.394062213" observedRunningTime="2026-02-03 12:21:09.500580597 +0000 UTC m=+941.975476685" watchObservedRunningTime="2026-02-03 12:21:10.139802912 +0000 UTC m=+942.614699000" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.284463 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.284580 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.330950 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.511749 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-248q5" Feb 03 12:21:10 crc kubenswrapper[4679]: I0203 12:21:10.512920 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:11 crc kubenswrapper[4679]: I0203 12:21:11.990710 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj"] Feb 03 12:21:11 crc kubenswrapper[4679]: I0203 12:21:11.992455 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:11 crc kubenswrapper[4679]: I0203 12:21:11.995650 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fzdfj" Feb 03 12:21:11 crc kubenswrapper[4679]: I0203 12:21:11.995723 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj"] Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.020111 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-bundle\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.020176 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwsz\" (UniqueName: \"kubernetes.io/projected/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-kube-api-access-xnwsz\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.020499 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-util\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.121548 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-util\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.121654 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-bundle\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.121678 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwsz\" (UniqueName: \"kubernetes.io/projected/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-kube-api-access-xnwsz\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.122238 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-util\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.122387 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-bundle\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.142541 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwsz\" (UniqueName: \"kubernetes.io/projected/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-kube-api-access-xnwsz\") pod \"3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.310320 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:12 crc kubenswrapper[4679]: I0203 12:21:12.608168 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj"] Feb 03 12:21:12 crc kubenswrapper[4679]: W0203 12:21:12.615926 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a9d5bd_ce1a_48df_8335_cb7c06ea40d5.slice/crio-f2d405c6bf8284e8e7d1262a39be0f7f84a82f8b91edbbea1f25d08cd0640bd6 WatchSource:0}: Error finding container f2d405c6bf8284e8e7d1262a39be0f7f84a82f8b91edbbea1f25d08cd0640bd6: Status 404 returned error can't find the container with id f2d405c6bf8284e8e7d1262a39be0f7f84a82f8b91edbbea1f25d08cd0640bd6 Feb 03 12:21:13 crc kubenswrapper[4679]: I0203 12:21:13.485053 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" event={"ID":"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5","Type":"ContainerStarted","Data":"9e94ef038da3e883a52ee27dce6861d55a6e3e858785782825de91aa182ac3db"} Feb 03 12:21:13 crc kubenswrapper[4679]: I0203 12:21:13.485104 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" event={"ID":"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5","Type":"ContainerStarted","Data":"f2d405c6bf8284e8e7d1262a39be0f7f84a82f8b91edbbea1f25d08cd0640bd6"} Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.133105 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2ffk"] Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.133437 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x2ffk" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="registry-server" containerID="cri-o://44c170eaa1a0f61c855df82034e377a004e5107e50e835fc03fb8ca6ed2d69bc" gracePeriod=2 Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.284673 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.289041 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.369256 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.501203 4679 generic.go:334] "Generic (PLEG): container finished" podID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerID="9e94ef038da3e883a52ee27dce6861d55a6e3e858785782825de91aa182ac3db" exitCode=0 Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.501317 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" event={"ID":"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5","Type":"ContainerDied","Data":"9e94ef038da3e883a52ee27dce6861d55a6e3e858785782825de91aa182ac3db"} Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.512711 4679 generic.go:334] "Generic (PLEG): container finished" podID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerID="44c170eaa1a0f61c855df82034e377a004e5107e50e835fc03fb8ca6ed2d69bc" 
exitCode=0 Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.513745 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerDied","Data":"44c170eaa1a0f61c855df82034e377a004e5107e50e835fc03fb8ca6ed2d69bc"} Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.576888 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.651160 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.768507 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltksf\" (UniqueName: \"kubernetes.io/projected/ab74d592-39b0-427b-b5d3-e0b53f3b189a-kube-api-access-ltksf\") pod \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.768587 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-catalog-content\") pod \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.768628 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-utilities\") pod \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\" (UID: \"ab74d592-39b0-427b-b5d3-e0b53f3b189a\") " Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.773641 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-utilities" (OuterVolumeSpecName: "utilities") pod "ab74d592-39b0-427b-b5d3-e0b53f3b189a" (UID: "ab74d592-39b0-427b-b5d3-e0b53f3b189a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.776457 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab74d592-39b0-427b-b5d3-e0b53f3b189a-kube-api-access-ltksf" (OuterVolumeSpecName: "kube-api-access-ltksf") pod "ab74d592-39b0-427b-b5d3-e0b53f3b189a" (UID: "ab74d592-39b0-427b-b5d3-e0b53f3b189a"). InnerVolumeSpecName "kube-api-access-ltksf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.831849 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab74d592-39b0-427b-b5d3-e0b53f3b189a" (UID: "ab74d592-39b0-427b-b5d3-e0b53f3b189a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.869990 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltksf\" (UniqueName: \"kubernetes.io/projected/ab74d592-39b0-427b-b5d3-e0b53f3b189a-kube-api-access-ltksf\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.870535 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:14 crc kubenswrapper[4679]: I0203 12:21:14.870552 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab74d592-39b0-427b-b5d3-e0b53f3b189a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.274860 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.274950 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.343256 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.522197 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2ffk" event={"ID":"ab74d592-39b0-427b-b5d3-e0b53f3b189a","Type":"ContainerDied","Data":"6cbc30bd6961a2ac9151296c7f229f9d2ad4be9d819d1be96545939233d6f939"} Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.522258 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2ffk" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.522274 4679 scope.go:117] "RemoveContainer" containerID="44c170eaa1a0f61c855df82034e377a004e5107e50e835fc03fb8ca6ed2d69bc" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.526082 4679 generic.go:334] "Generic (PLEG): container finished" podID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerID="3d2ee3a562b2bc0056c501197c9106fbba53ef23d7a2c9b56e621a8c632c9988" exitCode=0 Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.527453 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" event={"ID":"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5","Type":"ContainerDied","Data":"3d2ee3a562b2bc0056c501197c9106fbba53ef23d7a2c9b56e621a8c632c9988"} Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.550590 4679 scope.go:117] "RemoveContainer" containerID="c804409fabc94a0e9423fb111a8a281aa6d887287275dc03c8e87a4086eaeda4" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.565196 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x2ffk"] Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.575208 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x2ffk"] Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.582634 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:15 crc kubenswrapper[4679]: I0203 12:21:15.583080 4679 scope.go:117] "RemoveContainer" containerID="c891a77b31954f0657457b6b302f319575abfc4c138b7ac25e490387cab38a08" Feb 03 12:21:16 crc kubenswrapper[4679]: I0203 12:21:16.220401 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" path="/var/lib/kubelet/pods/ab74d592-39b0-427b-b5d3-e0b53f3b189a/volumes" Feb 03 12:21:16 crc kubenswrapper[4679]: I0203 12:21:16.534476 4679 generic.go:334] "Generic (PLEG): container finished" podID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerID="9c2abe3f494786de05839543df1ee7cefd8b34e0a957abcb5a897c421465c09b" exitCode=0 Feb 03 12:21:16 crc kubenswrapper[4679]: I0203 12:21:16.534576 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" event={"ID":"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5","Type":"ContainerDied","Data":"9c2abe3f494786de05839543df1ee7cefd8b34e0a957abcb5a897c421465c09b"} Feb 03 12:21:17 crc kubenswrapper[4679]: I0203 12:21:17.332671 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fqk2"] Feb 03 12:21:17 crc kubenswrapper[4679]: I0203 12:21:17.543478 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7fqk2" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="registry-server" containerID="cri-o://8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f" gracePeriod=2 Feb 03 12:21:17 crc kubenswrapper[4679]: I0203 12:21:17.902072 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.022011 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-util\") pod \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.022077 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwsz\" (UniqueName: \"kubernetes.io/projected/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-kube-api-access-xnwsz\") pod \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.022111 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-bundle\") pod \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\" (UID: \"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5\") " Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.023190 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-bundle" (OuterVolumeSpecName: "bundle") pod "97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" (UID: "97a9d5bd-ce1a-48df-8335-cb7c06ea40d5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.038944 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-kube-api-access-xnwsz" (OuterVolumeSpecName: "kube-api-access-xnwsz") pod "97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" (UID: "97a9d5bd-ce1a-48df-8335-cb7c06ea40d5"). InnerVolumeSpecName "kube-api-access-xnwsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.051654 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-util" (OuterVolumeSpecName: "util") pod "97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" (UID: "97a9d5bd-ce1a-48df-8335-cb7c06ea40d5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.099334 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.123576 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2hgn\" (UniqueName: \"kubernetes.io/projected/118753fe-f98a-427f-912a-a263b41c5056-kube-api-access-p2hgn\") pod \"118753fe-f98a-427f-912a-a263b41c5056\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.123804 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-catalog-content\") pod \"118753fe-f98a-427f-912a-a263b41c5056\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.123873 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-utilities\") pod \"118753fe-f98a-427f-912a-a263b41c5056\" (UID: \"118753fe-f98a-427f-912a-a263b41c5056\") " Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.124222 4679 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-util\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.124250 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwsz\" (UniqueName: \"kubernetes.io/projected/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-kube-api-access-xnwsz\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.124265 4679 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97a9d5bd-ce1a-48df-8335-cb7c06ea40d5-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.125229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-utilities" (OuterVolumeSpecName: "utilities") pod "118753fe-f98a-427f-912a-a263b41c5056" (UID: "118753fe-f98a-427f-912a-a263b41c5056"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.127669 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118753fe-f98a-427f-912a-a263b41c5056-kube-api-access-p2hgn" (OuterVolumeSpecName: "kube-api-access-p2hgn") pod "118753fe-f98a-427f-912a-a263b41c5056" (UID: "118753fe-f98a-427f-912a-a263b41c5056"). InnerVolumeSpecName "kube-api-access-p2hgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.178653 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "118753fe-f98a-427f-912a-a263b41c5056" (UID: "118753fe-f98a-427f-912a-a263b41c5056"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.228304 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.228499 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/118753fe-f98a-427f-912a-a263b41c5056-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.228521 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2hgn\" (UniqueName: \"kubernetes.io/projected/118753fe-f98a-427f-912a-a263b41c5056-kube-api-access-p2hgn\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.553429 4679 generic.go:334] "Generic (PLEG): container finished" podID="118753fe-f98a-427f-912a-a263b41c5056" containerID="8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f" exitCode=0 Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.553550 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fqk2" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.553569 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerDied","Data":"8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f"} Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.553652 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fqk2" event={"ID":"118753fe-f98a-427f-912a-a263b41c5056","Type":"ContainerDied","Data":"a2ee194e279310c14b8c96d20392a522a45cbf5d5703744f2403e1a4ffc2532d"} Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.553679 4679 scope.go:117] "RemoveContainer" containerID="8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.556392 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" event={"ID":"97a9d5bd-ce1a-48df-8335-cb7c06ea40d5","Type":"ContainerDied","Data":"f2d405c6bf8284e8e7d1262a39be0f7f84a82f8b91edbbea1f25d08cd0640bd6"} Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.556443 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d405c6bf8284e8e7d1262a39be0f7f84a82f8b91edbbea1f25d08cd0640bd6" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.556452 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.577230 4679 scope.go:117] "RemoveContainer" containerID="6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.579641 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fqk2"] Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.585396 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7fqk2"] Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.596038 4679 scope.go:117] "RemoveContainer" containerID="e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.614753 4679 scope.go:117] "RemoveContainer" containerID="8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f" Feb 03 12:21:18 crc kubenswrapper[4679]: E0203 12:21:18.615332 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f\": container with ID starting with 8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f not found: ID does not exist" containerID="8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.615394 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f"} err="failed to get container status \"8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f\": rpc error: code = NotFound desc = could not find container \"8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f\": container with ID starting with 8c1b6b3629bfb236461176f18472fe0d51ec7a0c8a621294852e193e65e7496f not found: ID does not exist" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.615429 4679 scope.go:117] "RemoveContainer" containerID="6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797" Feb 03 12:21:18 crc kubenswrapper[4679]: E0203 12:21:18.616018 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797\": container with ID starting with 6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797 not found: ID does not exist" containerID="6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.616111 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797"} err="failed to get container status \"6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797\": rpc error: code = NotFound desc = could not find container \"6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797\": container with ID starting with 6b770eba449511e06cdaa62a46e37a5fe3c76470ec20a1ebb5134c1ed9f8f797 not found: ID does not exist" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.616191 4679 scope.go:117] "RemoveContainer" containerID="e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e" Feb 03 12:21:18 crc kubenswrapper[4679]: E0203 12:21:18.616717 4679 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e\": container with ID starting with e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e not found: ID does not exist" containerID="e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.616756 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e"} err="failed to get container status \"e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e\": rpc error: code = NotFound desc = could not find container \"e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e\": container with ID starting with e04ff1a250f9f7f27e1ea689bd4bde40c8894f0949018c6ffeea16bd954e263e not found: ID does not exist" Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.933387 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj5zg"] Feb 03 12:21:18 crc kubenswrapper[4679]: I0203 12:21:18.934173 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gj5zg" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="registry-server" containerID="cri-o://e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b" gracePeriod=2 Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.314730 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.346691 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-catalog-content\") pod \"4c8149aa-107e-456f-bbc4-559a485542ea\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.346754 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-utilities\") pod \"4c8149aa-107e-456f-bbc4-559a485542ea\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.346900 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9pjn\" (UniqueName: \"kubernetes.io/projected/4c8149aa-107e-456f-bbc4-559a485542ea-kube-api-access-b9pjn\") pod \"4c8149aa-107e-456f-bbc4-559a485542ea\" (UID: \"4c8149aa-107e-456f-bbc4-559a485542ea\") " Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.347687 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-utilities" (OuterVolumeSpecName: "utilities") pod "4c8149aa-107e-456f-bbc4-559a485542ea" (UID: "4c8149aa-107e-456f-bbc4-559a485542ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.351277 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8149aa-107e-456f-bbc4-559a485542ea-kube-api-access-b9pjn" (OuterVolumeSpecName: "kube-api-access-b9pjn") pod "4c8149aa-107e-456f-bbc4-559a485542ea" (UID: "4c8149aa-107e-456f-bbc4-559a485542ea"). InnerVolumeSpecName "kube-api-access-b9pjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.374632 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c8149aa-107e-456f-bbc4-559a485542ea" (UID: "4c8149aa-107e-456f-bbc4-559a485542ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.448851 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.448894 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c8149aa-107e-456f-bbc4-559a485542ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.448905 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9pjn\" (UniqueName: \"kubernetes.io/projected/4c8149aa-107e-456f-bbc4-559a485542ea-kube-api-access-b9pjn\") on node \"crc\" DevicePath \"\"" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.565274 4679 generic.go:334] "Generic (PLEG): container finished" podID="4c8149aa-107e-456f-bbc4-559a485542ea" containerID="e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b" exitCode=0 Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.565382 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerDied","Data":"e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b"} Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.565423 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj5zg" event={"ID":"4c8149aa-107e-456f-bbc4-559a485542ea","Type":"ContainerDied","Data":"32ed2113b7117cb3153228da7212ab130112e96ad8d32258a8e40e1c92760b30"} Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.565449 4679 scope.go:117] "RemoveContainer" containerID="e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.565572 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gj5zg" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.584251 4679 scope.go:117] "RemoveContainer" containerID="d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.599293 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj5zg"] Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.603675 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj5zg"] Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.607527 4679 scope.go:117] "RemoveContainer" containerID="09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.621676 4679 scope.go:117] "RemoveContainer" containerID="e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b" Feb 03 12:21:19 crc kubenswrapper[4679]: E0203 12:21:19.622279 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b\": container with ID starting with e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b not found: ID does not exist" containerID="e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.622341 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b"} err="failed to get container status \"e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b\": rpc error: code = NotFound desc = could not find container \"e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b\": container with ID starting with e055bd839fb1153ef48c51f360f56382658b84aae4b53a51b585bffaf480875b not found: ID does not exist" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.622393 4679 scope.go:117] "RemoveContainer" containerID="d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e" Feb 03 12:21:19 crc kubenswrapper[4679]: E0203 12:21:19.623120 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e\": container with ID starting with d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e not found: ID does not exist" containerID="d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.623166 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e"} err="failed to get container status \"d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e\": rpc error: code = NotFound desc = could not find container \"d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e\": container with ID starting with d0661e07f30ee96e87e77a5349d4c4e806602cd122f7da8eabc3c687a763768e not found: ID does not exist" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.623192 4679 scope.go:117] "RemoveContainer" containerID="09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26" Feb 03 12:21:19 crc kubenswrapper[4679]: E0203 12:21:19.623489 4679 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26\": container with ID starting with 09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26 not found: ID does not exist" containerID="09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26" Feb 03 12:21:19 crc kubenswrapper[4679]: I0203 12:21:19.623518 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26"} err="failed to get container status \"09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26\": rpc error: code = NotFound desc = could not find container \"09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26\": container with ID starting with 09bcbf4065625d8328c0719f0838de3722ae7dcfb45802ba76268f874e9b0b26 not found: ID does not exist" Feb 03 12:21:20 crc kubenswrapper[4679]: I0203 12:21:20.232584 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118753fe-f98a-427f-912a-a263b41c5056" path="/var/lib/kubelet/pods/118753fe-f98a-427f-912a-a263b41c5056/volumes" Feb 03 12:21:20 crc kubenswrapper[4679]: I0203 12:21:20.233288 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" path="/var/lib/kubelet/pods/4c8149aa-107e-456f-bbc4-559a485542ea/volumes" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.044401 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz"] Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045054 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045072 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045084 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="extract-content" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045093 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="extract-content" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045103 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="extract-utilities" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045111 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="extract-utilities" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045123 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045130 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045139 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="extract-utilities" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045146 4679 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="extract-utilities" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045161 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045168 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045179 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="extract" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045186 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="extract" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045200 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="util" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045207 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="util" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045221 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="extract-content" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045228 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="extract-content" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045239 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="pull" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045246 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="pull" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045255 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="extract-content" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045264 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="extract-content" Feb 03 12:21:23 crc kubenswrapper[4679]: E0203 12:21:23.045277 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="extract-utilities" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045286 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="extract-utilities" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045451 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a9d5bd-ce1a-48df-8335-cb7c06ea40d5" containerName="extract" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045468 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8149aa-107e-456f-bbc4-559a485542ea" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045484 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab74d592-39b0-427b-b5d3-e0b53f3b189a" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.045500 4679 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="118753fe-f98a-427f-912a-a263b41c5056" containerName="registry-server" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.046161 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.048514 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-xzssd" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.068868 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz"] Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.111275 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vln2\" (UniqueName: \"kubernetes.io/projected/52069189-49bf-46cc-b13d-b7705a4e68f1-kube-api-access-9vln2\") pod \"openstack-operator-controller-init-68c5f5659f-77cqz\" (UID: \"52069189-49bf-46cc-b13d-b7705a4e68f1\") " pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.212575 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vln2\" (UniqueName: \"kubernetes.io/projected/52069189-49bf-46cc-b13d-b7705a4e68f1-kube-api-access-9vln2\") pod \"openstack-operator-controller-init-68c5f5659f-77cqz\" (UID: \"52069189-49bf-46cc-b13d-b7705a4e68f1\") " pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.233548 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vln2\" (UniqueName: \"kubernetes.io/projected/52069189-49bf-46cc-b13d-b7705a4e68f1-kube-api-access-9vln2\") pod \"openstack-operator-controller-init-68c5f5659f-77cqz\" (UID: \"52069189-49bf-46cc-b13d-b7705a4e68f1\") " pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.365706 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:23 crc kubenswrapper[4679]: I0203 12:21:23.618041 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz"] Feb 03 12:21:23 crc kubenswrapper[4679]: W0203 12:21:23.624887 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52069189_49bf_46cc_b13d_b7705a4e68f1.slice/crio-c2e2579ace6f3316c7becafc3189cfc6dff3763534dce27ab0235f1be11e9595 WatchSource:0}: Error finding container c2e2579ace6f3316c7becafc3189cfc6dff3763534dce27ab0235f1be11e9595: Status 404 returned error can't find the container with id c2e2579ace6f3316c7becafc3189cfc6dff3763534dce27ab0235f1be11e9595 Feb 03 12:21:24 crc kubenswrapper[4679]: I0203 12:21:24.621865 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" event={"ID":"52069189-49bf-46cc-b13d-b7705a4e68f1","Type":"ContainerStarted","Data":"c2e2579ace6f3316c7becafc3189cfc6dff3763534dce27ab0235f1be11e9595"} Feb 03 12:21:28 crc kubenswrapper[4679]: I0203 12:21:28.680270 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" event={"ID":"52069189-49bf-46cc-b13d-b7705a4e68f1","Type":"ContainerStarted","Data":"5a4e08989838c661b6c1b17fe0d80e6b95561ca0d31ebdfbd84b94db2f658163"} Feb 03 12:21:28 crc kubenswrapper[4679]: I0203 12:21:28.681105 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:28 crc kubenswrapper[4679]: I0203 12:21:28.727391 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" podStartSLOduration=1.6365552719999998 podStartE2EDuration="5.727352273s" podCreationTimestamp="2026-02-03 12:21:23 +0000 UTC" firstStartedPulling="2026-02-03 12:21:23.630537193 +0000 UTC m=+956.105433281" lastFinishedPulling="2026-02-03 12:21:27.721334194 +0000 UTC m=+960.196230282" observedRunningTime="2026-02-03 12:21:28.724593691 +0000 UTC m=+961.199489779" watchObservedRunningTime="2026-02-03 12:21:28.727352273 +0000 UTC m=+961.202248361" Feb 03 12:21:33 crc kubenswrapper[4679]: I0203 12:21:33.369022 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-68c5f5659f-77cqz" Feb 03 12:21:36 crc kubenswrapper[4679]: I0203 12:21:36.736323 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:21:36 crc kubenswrapper[4679]: I0203 12:21:36.736759 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.432220 4679 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.434528 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.437447 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.439527 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pvgzh" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.443710 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.446729 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-n2fdt" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.448434 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.492438 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.493719 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.497804 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.502900 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-vcb5f" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.542004 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.542963 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.546897 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7frmz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.554770 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.560156 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.561349 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.573265 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hczbz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.575244 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnr6l\" (UniqueName: \"kubernetes.io/projected/d39a188d-08b7-4670-a5da-c65da1b30936-kube-api-access-lnr6l\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-xhb56\" (UID: \"d39a188d-08b7-4670-a5da-c65da1b30936\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.575315 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbg4\" (UniqueName: \"kubernetes.io/projected/3722274c-5a6f-49ef-89ac-06fc5afd3098-kube-api-access-vjbg4\") pod \"cinder-operator-controller-manager-8d874c8fc-6l9l6\" (UID: \"3722274c-5a6f-49ef-89ac-06fc5afd3098\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.576659 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t74s\" (UniqueName: \"kubernetes.io/projected/d96d5316-a678-427e-aa6f-a606876142d3-kube-api-access-4t74s\") pod \"designate-operator-controller-manager-6d9697b7f4-x9kws\" (UID: \"d96d5316-a678-427e-aa6f-a606876142d3\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.598554 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.608926 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.610074 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.614816 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.619971 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t9cpl" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.622115 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.638584 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.639628 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.649302 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.650676 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qj766" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.660327 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.666011 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.679173 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.684977 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cv2f4" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.687903 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.689013 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgq2f\" (UniqueName: \"kubernetes.io/projected/ee886e3f-df4d-43e4-b1ad-8eec77ead216-kube-api-access-jgq2f\") pod \"horizon-operator-controller-manager-5fb775575f-7p976\" (UID: \"ee886e3f-df4d-43e4-b1ad-8eec77ead216\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.689100 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25j7j\" (UniqueName: \"kubernetes.io/projected/9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4-kube-api-access-25j7j\") pod \"heat-operator-controller-manager-69d6db494d-k5gcz\" (UID: \"9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.689248 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnr6l\" (UniqueName: \"kubernetes.io/projected/d39a188d-08b7-4670-a5da-c65da1b30936-kube-api-access-lnr6l\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-xhb56\" (UID: \"d39a188d-08b7-4670-a5da-c65da1b30936\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.689772 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjbg4\" (UniqueName: \"kubernetes.io/projected/3722274c-5a6f-49ef-89ac-06fc5afd3098-kube-api-access-vjbg4\") pod \"cinder-operator-controller-manager-8d874c8fc-6l9l6\" (UID: \"3722274c-5a6f-49ef-89ac-06fc5afd3098\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.689896 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wvd22\" (UniqueName: \"kubernetes.io/projected/e92384fd-2d3b-4ba9-b265-92dbc9941750-kube-api-access-wvd22\") pod \"glance-operator-controller-manager-8886f4c47-vgbcs\" (UID: \"e92384fd-2d3b-4ba9-b265-92dbc9941750\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.690017 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t74s\" (UniqueName: \"kubernetes.io/projected/d96d5316-a678-427e-aa6f-a606876142d3-kube-api-access-4t74s\") pod \"designate-operator-controller-manager-6d9697b7f4-x9kws\" (UID: \"d96d5316-a678-427e-aa6f-a606876142d3\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.694882 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.698676 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ft8dj" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.758758 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.763958 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t74s\" (UniqueName: \"kubernetes.io/projected/d96d5316-a678-427e-aa6f-a606876142d3-kube-api-access-4t74s\") pod \"designate-operator-controller-manager-6d9697b7f4-x9kws\" (UID: \"d96d5316-a678-427e-aa6f-a606876142d3\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.779362 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.780726 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.783918 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjbg4\" (UniqueName: \"kubernetes.io/projected/3722274c-5a6f-49ef-89ac-06fc5afd3098-kube-api-access-vjbg4\") pod \"cinder-operator-controller-manager-8d874c8fc-6l9l6\" (UID: \"3722274c-5a6f-49ef-89ac-06fc5afd3098\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.785669 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnr6l\" (UniqueName: \"kubernetes.io/projected/d39a188d-08b7-4670-a5da-c65da1b30936-kube-api-access-lnr6l\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-xhb56\" (UID: \"d39a188d-08b7-4670-a5da-c65da1b30936\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.793008 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-77tr2" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.793827 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796501 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzn4q\" (UniqueName: \"kubernetes.io/projected/2de6e912-5456-4209-85d7-2bddcedc0384-kube-api-access-mzn4q\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796605 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgq2f\" (UniqueName: \"kubernetes.io/projected/ee886e3f-df4d-43e4-b1ad-8eec77ead216-kube-api-access-jgq2f\") pod \"horizon-operator-controller-manager-5fb775575f-7p976\" (UID: \"ee886e3f-df4d-43e4-b1ad-8eec77ead216\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796634 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25j7j\" (UniqueName: \"kubernetes.io/projected/9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4-kube-api-access-25j7j\") pod \"heat-operator-controller-manager-69d6db494d-k5gcz\" (UID: \"9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796665 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5ws\" (UniqueName: \"kubernetes.io/projected/b498e6cd-6f07-461f-bf7a-5842461cbbbe-kube-api-access-dx5ws\") pod \"keystone-operator-controller-manager-84f48565d4-h77pz\" (UID: \"b498e6cd-6f07-461f-bf7a-5842461cbbbe\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796689 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbv5\" (UniqueName: 
\"kubernetes.io/projected/36b08aa8-071f-4862-821c-9ee85afcdf8e-kube-api-access-8rbv5\") pod \"ironic-operator-controller-manager-5f4b8bd54d-pwgd6\" (UID: \"36b08aa8-071f-4862-821c-9ee85afcdf8e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796744 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.796777 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvd22\" (UniqueName: \"kubernetes.io/projected/e92384fd-2d3b-4ba9-b265-92dbc9941750-kube-api-access-wvd22\") pod \"glance-operator-controller-manager-8886f4c47-vgbcs\" (UID: \"e92384fd-2d3b-4ba9-b265-92dbc9941750\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.815518 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.815707 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.817428 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.826069 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.831806 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-82tfm" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.836573 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvd22\" (UniqueName: \"kubernetes.io/projected/e92384fd-2d3b-4ba9-b265-92dbc9941750-kube-api-access-wvd22\") pod \"glance-operator-controller-manager-8886f4c47-vgbcs\" (UID: \"e92384fd-2d3b-4ba9-b265-92dbc9941750\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.836809 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.837806 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.838945 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25j7j\" (UniqueName: \"kubernetes.io/projected/9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4-kube-api-access-25j7j\") pod \"heat-operator-controller-manager-69d6db494d-k5gcz\" (UID: \"9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.840632 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-58sgp" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.841133 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.844114 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgq2f\" (UniqueName: \"kubernetes.io/projected/ee886e3f-df4d-43e4-b1ad-8eec77ead216-kube-api-access-jgq2f\") pod \"horizon-operator-controller-manager-5fb775575f-7p976\" (UID: \"ee886e3f-df4d-43e4-b1ad-8eec77ead216\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.883476 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.884783 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.896336 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897661 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897737 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvx45\" (UniqueName: \"kubernetes.io/projected/a0fa5212-9380-4d21-a8ae-a400eb674de3-kube-api-access-jvx45\") pod \"mariadb-operator-controller-manager-67bf948998-nvx58\" (UID: \"a0fa5212-9380-4d21-a8ae-a400eb674de3\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897797 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzn4q\" (UniqueName: \"kubernetes.io/projected/2de6e912-5456-4209-85d7-2bddcedc0384-kube-api-access-mzn4q\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897825 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gf42\" (UniqueName: \"kubernetes.io/projected/6d552366-fc97-4365-8abd-5b32b28a09b2-kube-api-access-6gf42\") pod \"neutron-operator-controller-manager-585dbc889-8gc44\" (UID: \"6d552366-fc97-4365-8abd-5b32b28a09b2\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897863 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxsx\" (UniqueName: \"kubernetes.io/projected/ee3e0d19-7d26-4e63-8859-f1a2596a0ba5-kube-api-access-lzxsx\") pod \"manila-operator-controller-manager-7dd968899f-m6jbm\" (UID: \"ee3e0d19-7d26-4e63-8859-f1a2596a0ba5\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897896 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5ws\" (UniqueName: \"kubernetes.io/projected/b498e6cd-6f07-461f-bf7a-5842461cbbbe-kube-api-access-dx5ws\") pod \"keystone-operator-controller-manager-84f48565d4-h77pz\" (UID: \"b498e6cd-6f07-461f-bf7a-5842461cbbbe\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.897927 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbv5\" (UniqueName: \"kubernetes.io/projected/36b08aa8-071f-4862-821c-9ee85afcdf8e-kube-api-access-8rbv5\") pod \"ironic-operator-controller-manager-5f4b8bd54d-pwgd6\" (UID: \"36b08aa8-071f-4862-821c-9ee85afcdf8e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:21:51 crc kubenswrapper[4679]: E0203 12:21:51.898405 4679 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 
03 12:21:51 crc kubenswrapper[4679]: E0203 12:21:51.898460 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert podName:2de6e912-5456-4209-85d7-2bddcedc0384 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:52.398437802 +0000 UTC m=+984.873333890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert") pod "infra-operator-controller-manager-79955696d6-vgg4d" (UID: "2de6e912-5456-4209-85d7-2bddcedc0384") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.899759 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.901532 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.912887 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.923332 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.924690 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.928613 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-zw6sr" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.929317 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8h4ng" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.929461 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.936317 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.972058 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzn4q\" (UniqueName: \"kubernetes.io/projected/2de6e912-5456-4209-85d7-2bddcedc0384-kube-api-access-mzn4q\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.975100 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbv5\" (UniqueName: \"kubernetes.io/projected/36b08aa8-071f-4862-821c-9ee85afcdf8e-kube-api-access-8rbv5\") pod \"ironic-operator-controller-manager-5f4b8bd54d-pwgd6\" (UID: \"36b08aa8-071f-4862-821c-9ee85afcdf8e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.984862 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5ws\" (UniqueName: \"kubernetes.io/projected/b498e6cd-6f07-461f-bf7a-5842461cbbbe-kube-api-access-dx5ws\") pod \"keystone-operator-controller-manager-84f48565d4-h77pz\" (UID: \"b498e6cd-6f07-461f-bf7a-5842461cbbbe\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.994908 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x"] Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.999363 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfj7q\" (UniqueName: \"kubernetes.io/projected/e25213d7-4c75-46b8-b39b-44e75557c434-kube-api-access-jfj7q\") pod \"octavia-operator-controller-manager-6687f8d877-4pvxk\" (UID: \"e25213d7-4c75-46b8-b39b-44e75557c434\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.999454 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdmh\" (UniqueName: \"kubernetes.io/projected/79b06c14-7e75-4306-8001-3217809de327-kube-api-access-jkdmh\") pod \"nova-operator-controller-manager-55bff696bd-bllmz\" (UID: \"79b06c14-7e75-4306-8001-3217809de327\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.999515 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gf42\" (UniqueName: \"kubernetes.io/projected/6d552366-fc97-4365-8abd-5b32b28a09b2-kube-api-access-6gf42\") pod \"neutron-operator-controller-manager-585dbc889-8gc44\" (UID: \"6d552366-fc97-4365-8abd-5b32b28a09b2\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:21:51 crc kubenswrapper[4679]: I0203 12:21:51.999550 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxsx\" (UniqueName: \"kubernetes.io/projected/ee3e0d19-7d26-4e63-8859-f1a2596a0ba5-kube-api-access-lzxsx\") pod \"manila-operator-controller-manager-7dd968899f-m6jbm\" (UID: \"ee3e0d19-7d26-4e63-8859-f1a2596a0ba5\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:21:51 
crc kubenswrapper[4679]: I0203 12:21:51.999624 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvx45\" (UniqueName: \"kubernetes.io/projected/a0fa5212-9380-4d21-a8ae-a400eb674de3-kube-api-access-jvx45\") pod \"mariadb-operator-controller-manager-67bf948998-nvx58\" (UID: \"a0fa5212-9380-4d21-a8ae-a400eb674de3\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.004141 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.007016 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.008176 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m4cdm" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.015224 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-42stf"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.016289 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.017168 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.028341 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lkw4j" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.064117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gf42\" (UniqueName: \"kubernetes.io/projected/6d552366-fc97-4365-8abd-5b32b28a09b2-kube-api-access-6gf42\") pod \"neutron-operator-controller-manager-585dbc889-8gc44\" (UID: \"6d552366-fc97-4365-8abd-5b32b28a09b2\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.064551 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvx45\" (UniqueName: \"kubernetes.io/projected/a0fa5212-9380-4d21-a8ae-a400eb674de3-kube-api-access-jvx45\") pod \"mariadb-operator-controller-manager-67bf948998-nvx58\" (UID: \"a0fa5212-9380-4d21-a8ae-a400eb674de3\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.065078 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.076532 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.086132 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxsx\" (UniqueName: \"kubernetes.io/projected/ee3e0d19-7d26-4e63-8859-f1a2596a0ba5-kube-api-access-lzxsx\") pod \"manila-operator-controller-manager-7dd968899f-m6jbm\" (UID: \"ee3e0d19-7d26-4e63-8859-f1a2596a0ba5\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.102104 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.102173 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxgjx\" (UniqueName: \"kubernetes.io/projected/11b2dd9f-a9fc-427c-a2a2-744484f359b4-kube-api-access-gxgjx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.102213 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfj7q\" (UniqueName: \"kubernetes.io/projected/e25213d7-4c75-46b8-b39b-44e75557c434-kube-api-access-jfj7q\") pod \"octavia-operator-controller-manager-6687f8d877-4pvxk\" (UID: \"e25213d7-4c75-46b8-b39b-44e75557c434\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.102256 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh4lm\" (UniqueName: \"kubernetes.io/projected/1f76a687-e27f-4d78-aeea-c2faca503549-kube-api-access-mh4lm\") pod \"ovn-operator-controller-manager-788c46999f-42stf\" (UID: \"1f76a687-e27f-4d78-aeea-c2faca503549\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.102288 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdmh\" (UniqueName: \"kubernetes.io/projected/79b06c14-7e75-4306-8001-3217809de327-kube-api-access-jkdmh\") pod \"nova-operator-controller-manager-55bff696bd-bllmz\" (UID: \"79b06c14-7e75-4306-8001-3217809de327\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.114479 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-42stf"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.121761 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.137460 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.138636 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.154444 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.156067 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ljmxr" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.168309 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.172485 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.179594 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdmh\" (UniqueName: \"kubernetes.io/projected/79b06c14-7e75-4306-8001-3217809de327-kube-api-access-jkdmh\") pod \"nova-operator-controller-manager-55bff696bd-bllmz\" (UID: \"79b06c14-7e75-4306-8001-3217809de327\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.182560 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfj7q\" (UniqueName: \"kubernetes.io/projected/e25213d7-4c75-46b8-b39b-44e75557c434-kube-api-access-jfj7q\") pod \"octavia-operator-controller-manager-6687f8d877-4pvxk\" (UID: \"e25213d7-4c75-46b8-b39b-44e75557c434\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.185287 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.191278 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-p2t4p" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.204282 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttj7g\" (UniqueName: \"kubernetes.io/projected/6b1f821d-79a5-4fe4-bc8a-f850716781e7-kube-api-access-ttj7g\") pod \"placement-operator-controller-manager-5b964cf4cd-46lw2\" (UID: \"6b1f821d-79a5-4fe4-bc8a-f850716781e7\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.204528 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.204629 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxgjx\" (UniqueName: \"kubernetes.io/projected/11b2dd9f-a9fc-427c-a2a2-744484f359b4-kube-api-access-gxgjx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.204715 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh4lm\" (UniqueName: \"kubernetes.io/projected/1f76a687-e27f-4d78-aeea-c2faca503549-kube-api-access-mh4lm\") pod \"ovn-operator-controller-manager-788c46999f-42stf\" (UID: \"1f76a687-e27f-4d78-aeea-c2faca503549\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.204805 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqkwd\" (UniqueName: \"kubernetes.io/projected/35892343-44c5-4cfb-9061-0b0542d23b99-kube-api-access-hqkwd\") pod \"swift-operator-controller-manager-68fc8c869-vxkt7\" (UID: \"35892343-44c5-4cfb-9061-0b0542d23b99\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" Feb 03 12:21:52 crc kubenswrapper[4679]: E0203 12:21:52.205040 4679 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:52 crc kubenswrapper[4679]: E0203 12:21:52.205161 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert podName:11b2dd9f-a9fc-427c-a2a2-744484f359b4 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:52.705143872 +0000 UTC m=+985.180039950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" (UID: "11b2dd9f-a9fc-427c-a2a2-744484f359b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.220523 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.251930 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.290394 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.295604 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.295649 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.293783 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh4lm\" (UniqueName: \"kubernetes.io/projected/1f76a687-e27f-4d78-aeea-c2faca503549-kube-api-access-mh4lm\") pod \"ovn-operator-controller-manager-788c46999f-42stf\" (UID: \"1f76a687-e27f-4d78-aeea-c2faca503549\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.307284 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.310343 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.333930 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.334989 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxgjx\" (UniqueName: \"kubernetes.io/projected/11b2dd9f-a9fc-427c-a2a2-744484f359b4-kube-api-access-gxgjx\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.370940 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.372636 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqkwd\" (UniqueName: \"kubernetes.io/projected/35892343-44c5-4cfb-9061-0b0542d23b99-kube-api-access-hqkwd\") pod \"swift-operator-controller-manager-68fc8c869-vxkt7\" (UID: \"35892343-44c5-4cfb-9061-0b0542d23b99\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.373052 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttj7g\" (UniqueName: \"kubernetes.io/projected/6b1f821d-79a5-4fe4-bc8a-f850716781e7-kube-api-access-ttj7g\") pod \"placement-operator-controller-manager-5b964cf4cd-46lw2\" (UID: \"6b1f821d-79a5-4fe4-bc8a-f850716781e7\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.373180 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.390175 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.391986 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8mlr9" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.392671 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cwq9h" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.433364 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.470757 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttj7g\" (UniqueName: \"kubernetes.io/projected/6b1f821d-79a5-4fe4-bc8a-f850716781e7-kube-api-access-ttj7g\") pod \"placement-operator-controller-manager-5b964cf4cd-46lw2\" (UID: \"6b1f821d-79a5-4fe4-bc8a-f850716781e7\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.473851 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqkwd\" (UniqueName: \"kubernetes.io/projected/35892343-44c5-4cfb-9061-0b0542d23b99-kube-api-access-hqkwd\") pod \"swift-operator-controller-manager-68fc8c869-vxkt7\" (UID: \"35892343-44c5-4cfb-9061-0b0542d23b99\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.486706 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.486846 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kgz\" (UniqueName: \"kubernetes.io/projected/dedc1caa-ae76-49df-818b-49e570c09a31-kube-api-access-f2kgz\") pod \"test-operator-controller-manager-56f8bfcd9f-89mxg\" (UID: \"dedc1caa-ae76-49df-818b-49e570c09a31\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.487132 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnncl\" (UniqueName: \"kubernetes.io/projected/8e3f82d2-bf0a-4203-80af-3b48711ad1f0-kube-api-access-lnncl\") pod \"telemetry-operator-controller-manager-64b5b76f97-wktnn\" (UID: \"8e3f82d2-bf0a-4203-80af-3b48711ad1f0\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" Feb 03 12:21:52 crc kubenswrapper[4679]: E0203 12:21:52.487524 4679 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:52 crc kubenswrapper[4679]: E0203 12:21:52.487638 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert podName:2de6e912-5456-4209-85d7-2bddcedc0384 nodeName:}" failed. 
No retries permitted until 2026-02-03 12:21:53.487619275 +0000 UTC m=+985.962515363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert") pod "infra-operator-controller-manager-79955696d6-vgg4d" (UID: "2de6e912-5456-4209-85d7-2bddcedc0384") : secret "infra-operator-webhook-server-cert" not found
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.532267 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2"
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.546800 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7"
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.580166 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg"]
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.590468 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnncl\" (UniqueName: \"kubernetes.io/projected/8e3f82d2-bf0a-4203-80af-3b48711ad1f0-kube-api-access-lnncl\") pod \"telemetry-operator-controller-manager-64b5b76f97-wktnn\" (UID: \"8e3f82d2-bf0a-4203-80af-3b48711ad1f0\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn"
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.592566 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kgz\" (UniqueName: \"kubernetes.io/projected/dedc1caa-ae76-49df-818b-49e570c09a31-kube-api-access-f2kgz\") pod \"test-operator-controller-manager-56f8bfcd9f-89mxg\" (UID: \"dedc1caa-ae76-49df-818b-49e570c09a31\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg"
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.623857 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-w6xc9"]
Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.627106 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9"
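The `infra-operator-webhook-server-cert` failure here and the `openstack-baremetal-operator-webhook-server-cert` failures above are the same pattern: the pod spec references a Secret that does not exist yet (webhook serving certificates are typically generated after the operator deployments are applied), so every MountVolume.SetUp attempt fails until it shows up. A minimal client-go sketch for checking the Secrets named in these entries; the kubeconfig path is a placeholder, and the secret names are taken verbatim from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at the cluster's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Secret names reported missing in the surrounding entries.
	for _, name := range []string{
		"infra-operator-webhook-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
	} {
		_, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err) // "not found" until the cert is created
		} else {
			fmt.Printf("%s: present\n", name)
		}
	}
}
```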
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.633628 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-k8h2n" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.635685 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-w6xc9"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.805887 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.806333 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5w7\" (UniqueName: \"kubernetes.io/projected/3f6911aa-e91a-4ab6-b2cd-0c1a08977a57-kube-api-access-vf5w7\") pod \"watcher-operator-controller-manager-564965969-w6xc9\" (UID: \"3f6911aa-e91a-4ab6-b2cd-0c1a08977a57\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:21:52 crc kubenswrapper[4679]: E0203 12:21:52.806091 4679 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:52 crc kubenswrapper[4679]: E0203 12:21:52.809610 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert podName:11b2dd9f-a9fc-427c-a2a2-744484f359b4 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:53.809572209 +0000 UTC m=+986.284468297 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" (UID: "11b2dd9f-a9fc-427c-a2a2-744484f359b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.838906 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kgz\" (UniqueName: \"kubernetes.io/projected/dedc1caa-ae76-49df-818b-49e570c09a31-kube-api-access-f2kgz\") pod \"test-operator-controller-manager-56f8bfcd9f-89mxg\" (UID: \"dedc1caa-ae76-49df-818b-49e570c09a31\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.839435 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnncl\" (UniqueName: \"kubernetes.io/projected/8e3f82d2-bf0a-4203-80af-3b48711ad1f0-kube-api-access-lnncl\") pod \"telemetry-operator-controller-manager-64b5b76f97-wktnn\" (UID: \"8e3f82d2-bf0a-4203-80af-3b48711ad1f0\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.936978 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5w7\" (UniqueName: \"kubernetes.io/projected/3f6911aa-e91a-4ab6-b2cd-0c1a08977a57-kube-api-access-vf5w7\") pod \"watcher-operator-controller-manager-564965969-w6xc9\" (UID: \"3f6911aa-e91a-4ab6-b2cd-0c1a08977a57\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.972483 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.972529 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.974024 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.993886 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.995127 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.995322 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wgdq2" Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.997672 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p"] Feb 03 12:21:52 crc kubenswrapper[4679]: I0203 12:21:52.998921 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.004071 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5w7\" (UniqueName: \"kubernetes.io/projected/3f6911aa-e91a-4ab6-b2cd-0c1a08977a57-kube-api-access-vf5w7\") pod \"watcher-operator-controller-manager-564965969-w6xc9\" (UID: \"3f6911aa-e91a-4ab6-b2cd-0c1a08977a57\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.015254 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jtpnk" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.034142 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.036122 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p"] Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.037813 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.037875 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj7hk\" (UniqueName: \"kubernetes.io/projected/ebf666dd-6b96-4907-8024-800d9634590f-kube-api-access-vj7hk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ddt7p\" (UID: \"ebf666dd-6b96-4907-8024-800d9634590f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.037923 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xnn\" (UniqueName: \"kubernetes.io/projected/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-kube-api-access-c4xnn\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.037978 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.054668 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf"] Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.075170 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.142142 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xnn\" (UniqueName: \"kubernetes.io/projected/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-kube-api-access-c4xnn\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.142236 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.142297 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.142348 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj7hk\" (UniqueName: \"kubernetes.io/projected/ebf666dd-6b96-4907-8024-800d9634590f-kube-api-access-vj7hk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ddt7p\" (UID: \"ebf666dd-6b96-4907-8024-800d9634590f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.142787 4679 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.142866 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:53.642841634 +0000 UTC m=+986.117737942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "metrics-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.143037 4679 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.143066 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:53.643058669 +0000 UTC m=+986.117954987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.179815 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj7hk\" (UniqueName: \"kubernetes.io/projected/ebf666dd-6b96-4907-8024-800d9634590f-kube-api-access-vj7hk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ddt7p\" (UID: \"ebf666dd-6b96-4907-8024-800d9634590f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.212517 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.349162 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs"] Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.442154 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xnn\" (UniqueName: \"kubernetes.io/projected/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-kube-api-access-c4xnn\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.555599 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.555834 4679 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.555889 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert podName:2de6e912-5456-4209-85d7-2bddcedc0384 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:55.555872742 +0000 UTC m=+988.030768830 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert") pod "infra-operator-controller-manager-79955696d6-vgg4d" (UID: "2de6e912-5456-4209-85d7-2bddcedc0384") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.657430 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.657828 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.658003 4679 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.658065 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:54.658047324 +0000 UTC m=+987.132943412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "metrics-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.658496 4679 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.658537 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:54.658520326 +0000 UTC m=+987.133416414 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.864430 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.864666 4679 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: E0203 12:21:53.864727 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert podName:11b2dd9f-a9fc-427c-a2a2-744484f359b4 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:55.864706537 +0000 UTC m=+988.339602625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" (UID: "11b2dd9f-a9fc-427c-a2a2-744484f359b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:53 crc kubenswrapper[4679]: I0203 12:21:53.960921 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" event={"ID":"e92384fd-2d3b-4ba9-b265-92dbc9941750","Type":"ContainerStarted","Data":"b0760bc5e9073ea251a6c7f49151906e6eb6eef455fa35df04383536c90d9405"} Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.061821 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.093961 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.115484 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.123837 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.143396 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.177058 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.199890 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.208729 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.227526 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.368289 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.391000 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.399843 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.408672 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.420923 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm"] Feb 03 12:21:54 crc kubenswrapper[4679]: W0203 12:21:54.426847 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35892343_44c5_4cfb_9061_0b0542d23b99.slice/crio-9feb2147bed7cd3d462232312bfe28f1deba758ce125fb5188d9de434bfd1865 WatchSource:0}: Error finding container 9feb2147bed7cd3d462232312bfe28f1deba758ce125fb5188d9de434bfd1865: Status 404 returned error can't find the container with id 9feb2147bed7cd3d462232312bfe28f1deba758ce125fb5188d9de434bfd1865 Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.477784 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvx45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-nvx58_openstack-operators(a0fa5212-9380-4d21-a8ae-a400eb674de3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.480521 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh4lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-42stf_openstack-operators(1f76a687-e27f-4d78-aeea-c2faca503549): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc 
kubenswrapper[4679]: E0203 12:21:54.480611 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" podUID="a0fa5212-9380-4d21-a8ae-a400eb674de3" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.482194 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" podUID="1f76a687-e27f-4d78-aeea-c2faca503549" Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.517510 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2"] Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.539582 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttj7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-46lw2_openstack-operators(6b1f821d-79a5-4fe4-bc8a-f850716781e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.540018 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnncl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-wktnn_openstack-operators(8e3f82d2-bf0a-4203-80af-3b48711ad1f0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.541965 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" podUID="8e3f82d2-bf0a-4203-80af-3b48711ad1f0" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.542106 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" podUID="6b1f821d-79a5-4fe4-bc8a-f850716781e7" Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.543685 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.546405 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-42stf"] Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.557866 
4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vj7hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ddt7p_openstack-operators(ebf666dd-6b96-4907-8024-800d9634590f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.558987 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" podUID="ebf666dd-6b96-4907-8024-800d9634590f" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.561518 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lzxsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-m6jbm_openstack-operators(ee3e0d19-7d26-4e63-8859-f1a2596a0ba5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.563606 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" podUID="ee3e0d19-7d26-4e63-8859-f1a2596a0ba5" Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.568587 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-w6xc9"] Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.577751 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg"] Feb 03 12:21:54 crc kubenswrapper[4679]: W0203 12:21:54.579172 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddedc1caa_ae76_49df_818b_49e570c09a31.slice/crio-083cced6100720066bf1319cb66c16efe324c00b11efe93e8ba6b87d51576a5a WatchSource:0}: Error finding container 083cced6100720066bf1319cb66c16efe324c00b11efe93e8ba6b87d51576a5a: Status 404 returned error can't find the container with id 083cced6100720066bf1319cb66c16efe324c00b11efe93e8ba6b87d51576a5a Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.581670 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f2kgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-89mxg_openstack-operators(dedc1caa-ae76-49df-818b-49e570c09a31): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.582822 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" podUID="dedc1caa-ae76-49df-818b-49e570c09a31" Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.684733 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.684834 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.685064 4679 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.685125 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:56.685108557 +0000 UTC m=+989.160004645 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "metrics-server-cert" not found Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.685842 4679 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.685972 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:56.685944658 +0000 UTC m=+989.160840746 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "webhook-server-cert" not found Feb 03 12:21:54 crc kubenswrapper[4679]: I0203 12:21:54.984684 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" event={"ID":"1f76a687-e27f-4d78-aeea-c2faca503549","Type":"ContainerStarted","Data":"123191bdab9e02d9eb7e9078a87b16643c8d603ab9f8bc7832df6e1e5458201f"} Feb 03 12:21:54 crc kubenswrapper[4679]: E0203 12:21:54.999044 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" podUID="1f76a687-e27f-4d78-aeea-c2faca503549" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.018826 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" event={"ID":"ee886e3f-df4d-43e4-b1ad-8eec77ead216","Type":"ContainerStarted","Data":"84bdc81008b4ab7dc53a7ac1ce165b5e554a617634565c9beba786940a1fe047"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.030531 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" event={"ID":"b498e6cd-6f07-461f-bf7a-5842461cbbbe","Type":"ContainerStarted","Data":"c3e5e3d84a0ad3686a41493e3663bcb2f684de654da7e0b6692031d62fe7dc25"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.046566 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" event={"ID":"79b06c14-7e75-4306-8001-3217809de327","Type":"ContainerStarted","Data":"15a98d094ec434ca71e92efb9f3fe2262b4c8e6477e5d2f81dec60cc2e92cf17"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.062652 4679 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" event={"ID":"ebf666dd-6b96-4907-8024-800d9634590f","Type":"ContainerStarted","Data":"a02f9700309e77ad5b03b1f1363509f09d125b60355c8b5450b0fa4b02d800d7"} Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.070767 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" podUID="ebf666dd-6b96-4907-8024-800d9634590f" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.088602 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" event={"ID":"8e3f82d2-bf0a-4203-80af-3b48711ad1f0","Type":"ContainerStarted","Data":"d4e188683466c1834a4a3094edec6e1adcc4ebc12555fef08510ff07f2c505aa"} Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.090907 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" podUID="8e3f82d2-bf0a-4203-80af-3b48711ad1f0" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.108047 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" event={"ID":"35892343-44c5-4cfb-9061-0b0542d23b99","Type":"ContainerStarted","Data":"9feb2147bed7cd3d462232312bfe28f1deba758ce125fb5188d9de434bfd1865"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.128782 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" event={"ID":"9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4","Type":"ContainerStarted","Data":"adf9b7a30688f9dc7f29cb56528b0484b92a1a779270e60744ee1aedd5cfeb22"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.138869 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" event={"ID":"3f6911aa-e91a-4ab6-b2cd-0c1a08977a57","Type":"ContainerStarted","Data":"ded1b0b885cb9b2a6407b867202ea397dfe02b276037b5f2ff822cc51ef483a0"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.164610 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" event={"ID":"d96d5316-a678-427e-aa6f-a606876142d3","Type":"ContainerStarted","Data":"02a8e7f967e0c86d1ca7cd3dca21b53625b43fabf34d37859b695f24ec5cafd0"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.193714 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" event={"ID":"a0fa5212-9380-4d21-a8ae-a400eb674de3","Type":"ContainerStarted","Data":"fa942562acd6f0ffef72bcd57762a4c59cae87437c56ded6bd336a84bc2af5bc"} Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.237040 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" podUID="a0fa5212-9380-4d21-a8ae-a400eb674de3" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.267098 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" event={"ID":"36b08aa8-071f-4862-821c-9ee85afcdf8e","Type":"ContainerStarted","Data":"840f91d81ec37c5ae4de75df6c79224c2751bb2c9e4b1ab70c16abff5be93a52"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.271349 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" event={"ID":"6b1f821d-79a5-4fe4-bc8a-f850716781e7","Type":"ContainerStarted","Data":"21c776035b10a3e30c959dc8f509c8026e2eb97c475213b81f2271544ccbc380"} Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.273196 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" podUID="6b1f821d-79a5-4fe4-bc8a-f850716781e7" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.273769 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" event={"ID":"6d552366-fc97-4365-8abd-5b32b28a09b2","Type":"ContainerStarted","Data":"4a0d0e18c853fe5aa21187ed951b9c46d75d3b006f6b699c43d9d4f0f71d50f9"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.279160 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" event={"ID":"3722274c-5a6f-49ef-89ac-06fc5afd3098","Type":"ContainerStarted","Data":"3452458a12c52d7d41292a3dc44f486f23cf1890bb69231f3b009f5c83156243"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.283044 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" event={"ID":"ee3e0d19-7d26-4e63-8859-f1a2596a0ba5","Type":"ContainerStarted","Data":"bc8ca5641d2e7d02a82d79e9e2e72e40cd2cb80e8333a92ffe0947ea77690c2c"} Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.288708 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" podUID="ee3e0d19-7d26-4e63-8859-f1a2596a0ba5" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.291283 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" event={"ID":"dedc1caa-ae76-49df-818b-49e570c09a31","Type":"ContainerStarted","Data":"083cced6100720066bf1319cb66c16efe324c00b11efe93e8ba6b87d51576a5a"} Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.293196 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" podUID="dedc1caa-ae76-49df-818b-49e570c09a31" Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.294838 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" event={"ID":"d39a188d-08b7-4670-a5da-c65da1b30936","Type":"ContainerStarted","Data":"7592326b735cbbd552ab43c9db6ed24cd154599252b071be694e61fff1112fdf"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.296097 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" event={"ID":"e25213d7-4c75-46b8-b39b-44e75557c434","Type":"ContainerStarted","Data":"3b5d0386a137baab5ad1ed17b6a7eef98b72dd06cda2c7483d9a33355fd16247"} Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.591822 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.592106 4679 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.592340 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert podName:2de6e912-5456-4209-85d7-2bddcedc0384 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:59.592320852 +0000 UTC m=+992.067216940 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert") pod "infra-operator-controller-manager-79955696d6-vgg4d" (UID: "2de6e912-5456-4209-85d7-2bddcedc0384") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:55 crc kubenswrapper[4679]: I0203 12:21:55.898702 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.898891 4679 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:55 crc kubenswrapper[4679]: E0203 12:21:55.899013 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert podName:11b2dd9f-a9fc-427c-a2a2-744484f359b4 nodeName:}" failed. No retries permitted until 2026-02-03 12:21:59.89898293 +0000 UTC m=+992.373879018 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" (UID: "11b2dd9f-a9fc-427c-a2a2-744484f359b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.317635 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" podUID="ebf666dd-6b96-4907-8024-800d9634590f" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.318547 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" podUID="ee3e0d19-7d26-4e63-8859-f1a2596a0ba5" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.318638 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" podUID="a0fa5212-9380-4d21-a8ae-a400eb674de3" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.318694 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" podUID="1f76a687-e27f-4d78-aeea-c2faca503549" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.318692 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" podUID="6b1f821d-79a5-4fe4-bc8a-f850716781e7" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.318743 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" podUID="8e3f82d2-bf0a-4203-80af-3b48711ad1f0" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.320460 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" podUID="dedc1caa-ae76-49df-818b-49e570c09a31" Feb 03 12:21:56 crc kubenswrapper[4679]: I0203 12:21:56.715110 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:56 crc kubenswrapper[4679]: I0203 12:21:56.715190 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.715346 4679 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.715441 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:00.715415387 +0000 UTC m=+993.190311475 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "metrics-server-cert" not found Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.715911 4679 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:21:56 crc kubenswrapper[4679]: E0203 12:21:56.715961 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:00.71594981 +0000 UTC m=+993.190845898 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "webhook-server-cert" not found Feb 03 12:21:59 crc kubenswrapper[4679]: I0203 12:21:59.613612 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:21:59 crc kubenswrapper[4679]: E0203 12:21:59.613867 4679 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:59 crc kubenswrapper[4679]: E0203 12:21:59.614306 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert podName:2de6e912-5456-4209-85d7-2bddcedc0384 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:07.614283583 +0000 UTC m=+1000.089179671 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert") pod "infra-operator-controller-manager-79955696d6-vgg4d" (UID: "2de6e912-5456-4209-85d7-2bddcedc0384") : secret "infra-operator-webhook-server-cert" not found Feb 03 12:21:59 crc kubenswrapper[4679]: I0203 12:21:59.918834 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:21:59 crc kubenswrapper[4679]: E0203 12:21:59.919118 4679 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:21:59 crc kubenswrapper[4679]: E0203 12:21:59.919258 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert podName:11b2dd9f-a9fc-427c-a2a2-744484f359b4 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:07.919225507 +0000 UTC m=+1000.394121595 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" (UID: "11b2dd9f-a9fc-427c-a2a2-744484f359b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:22:00 crc kubenswrapper[4679]: I0203 12:22:00.734679 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:00 crc kubenswrapper[4679]: I0203 12:22:00.735061 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:00 crc kubenswrapper[4679]: E0203 12:22:00.734846 4679 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 12:22:00 crc kubenswrapper[4679]: E0203 12:22:00.735186 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:08.73515398 +0000 UTC m=+1001.210050078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "webhook-server-cert" not found Feb 03 12:22:00 crc kubenswrapper[4679]: E0203 12:22:00.735285 4679 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:22:00 crc kubenswrapper[4679]: E0203 12:22:00.735335 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:08.735323735 +0000 UTC m=+1001.210219823 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "metrics-server-cert" not found Feb 03 12:22:06 crc kubenswrapper[4679]: I0203 12:22:06.736123 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:22:06 crc kubenswrapper[4679]: I0203 12:22:06.737174 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:22:07 crc kubenswrapper[4679]: I0203 12:22:07.670799 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:22:07 crc kubenswrapper[4679]: I0203 12:22:07.691164 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de6e912-5456-4209-85d7-2bddcedc0384-cert\") pod \"infra-operator-controller-manager-79955696d6-vgg4d\" (UID: \"2de6e912-5456-4209-85d7-2bddcedc0384\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:22:07 crc kubenswrapper[4679]: I0203 12:22:07.867344 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:22:07 crc kubenswrapper[4679]: I0203 12:22:07.974935 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:22:07 crc kubenswrapper[4679]: E0203 12:22:07.975222 4679 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:22:07 crc kubenswrapper[4679]: E0203 12:22:07.975379 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert podName:11b2dd9f-a9fc-427c-a2a2-744484f359b4 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:23.975339863 +0000 UTC m=+1016.450235951 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" (UID: "11b2dd9f-a9fc-427c-a2a2-744484f359b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 12:22:08 crc kubenswrapper[4679]: E0203 12:22:08.437155 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Feb 03 12:22:08 crc kubenswrapper[4679]: E0203 12:22:08.437442 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25j7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-k5gcz_openstack-operators(9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:22:08 crc kubenswrapper[4679]: E0203 12:22:08.438640 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" podUID="9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4" Feb 03 12:22:08 crc kubenswrapper[4679]: E0203 12:22:08.477025 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" podUID="9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4" Feb 03 12:22:08 crc kubenswrapper[4679]: I0203 12:22:08.791231 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:08 crc kubenswrapper[4679]: I0203 12:22:08.791317 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:08 crc kubenswrapper[4679]: E0203 12:22:08.791512 4679 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 12:22:08 crc kubenswrapper[4679]: E0203 12:22:08.791592 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs podName:632ab40b-9540-48ad-b1c7-7b5b1603e4d2 nodeName:}" failed. No retries permitted until 2026-02-03 12:22:24.791566906 +0000 UTC m=+1017.266462994 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs") pod "openstack-operator-controller-manager-599dbc9849-9t5wf" (UID: "632ab40b-9540-48ad-b1c7-7b5b1603e4d2") : secret "metrics-server-cert" not found Feb 03 12:22:08 crc kubenswrapper[4679]: I0203 12:22:08.814029 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-webhook-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:09 crc kubenswrapper[4679]: E0203 12:22:09.965059 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 03 12:22:09 crc kubenswrapper[4679]: E0203 12:22:09.965847 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkdmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-bllmz_openstack-operators(79b06c14-7e75-4306-8001-3217809de327): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:22:09 crc kubenswrapper[4679]: E0203 12:22:09.967270 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" podUID="79b06c14-7e75-4306-8001-3217809de327" Feb 03 12:22:10 crc kubenswrapper[4679]: E0203 12:22:10.495074 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" podUID="79b06c14-7e75-4306-8001-3217809de327" Feb 03 12:22:13 crc kubenswrapper[4679]: I0203 12:22:13.361410 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d"] Feb 03 12:22:14 crc kubenswrapper[4679]: W0203 12:22:14.696799 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2de6e912_5456_4209_85d7_2bddcedc0384.slice/crio-93c1e3ae8eb87beae382e2ca13ba6bb1f0d6585e022bea64c36fdca5e2e675ff WatchSource:0}: Error finding container 93c1e3ae8eb87beae382e2ca13ba6bb1f0d6585e022bea64c36fdca5e2e675ff: Status 404 returned error can't find the container with id 93c1e3ae8eb87beae382e2ca13ba6bb1f0d6585e022bea64c36fdca5e2e675ff Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.530229 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" event={"ID":"2de6e912-5456-4209-85d7-2bddcedc0384","Type":"ContainerStarted","Data":"93c1e3ae8eb87beae382e2ca13ba6bb1f0d6585e022bea64c36fdca5e2e675ff"} Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.536096 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" event={"ID":"6d552366-fc97-4365-8abd-5b32b28a09b2","Type":"ContainerStarted","Data":"5a1fa30067e94db25eb9ea66c0b6c4465fca235ce0becbdb503355a9d70ac5d7"} Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.536286 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.539236 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" event={"ID":"d39a188d-08b7-4670-a5da-c65da1b30936","Type":"ContainerStarted","Data":"5d8e7714514dd79a83de67a78fb9a184a00e9c2650820871566386ce65b61f69"} Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.539305 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.557429 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" podStartSLOduration=8.343900302 podStartE2EDuration="24.557407714s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.352147359 +0000 UTC m=+986.827043447" 
lastFinishedPulling="2026-02-03 12:22:10.565654771 +0000 UTC m=+1003.040550859" observedRunningTime="2026-02-03 12:22:15.551849951 +0000 UTC m=+1008.026746039" watchObservedRunningTime="2026-02-03 12:22:15.557407714 +0000 UTC m=+1008.032303802" Feb 03 12:22:15 crc kubenswrapper[4679]: I0203 12:22:15.586786 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" podStartSLOduration=8.399851789 podStartE2EDuration="24.586742523s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.379148557 +0000 UTC m=+986.854044645" lastFinishedPulling="2026-02-03 12:22:10.566039291 +0000 UTC m=+1003.040935379" observedRunningTime="2026-02-03 12:22:15.580003989 +0000 UTC m=+1008.054900087" watchObservedRunningTime="2026-02-03 12:22:15.586742523 +0000 UTC m=+1008.061638611" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.560950 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" event={"ID":"ebf666dd-6b96-4907-8024-800d9634590f","Type":"ContainerStarted","Data":"a50967bc4e85e1c5f9a9d7fb8184784ff5d25fae7d84520f31ab56f5c280df10"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.576489 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" event={"ID":"a0fa5212-9380-4d21-a8ae-a400eb674de3","Type":"ContainerStarted","Data":"2892fd5867867ea8ac39fb419fd96440629f57486027559d71e64035cf045130"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.576703 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.582705 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" event={"ID":"e25213d7-4c75-46b8-b39b-44e75557c434","Type":"ContainerStarted","Data":"4b82ed1483928964c13b20c7b8537847f76f2358bf2d36febbf0454741ab5b9f"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.582990 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.587792 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" event={"ID":"ee3e0d19-7d26-4e63-8859-f1a2596a0ba5","Type":"ContainerStarted","Data":"36f368c3512d26b367fc0fe7beeac04e259a922fecb7e56f2a0e373289b54218"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.589037 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.598857 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" event={"ID":"3f6911aa-e91a-4ab6-b2cd-0c1a08977a57","Type":"ContainerStarted","Data":"94af218323e8919c50fdd863781e223661e510e0de596548d667538d3de25efc"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.599560 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.607985 
4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" event={"ID":"dedc1caa-ae76-49df-818b-49e570c09a31","Type":"ContainerStarted","Data":"b436f59f8a92fe336900a15223412ea3d9beab1762bd302861a711c9479a2312"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.609152 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.616333 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" event={"ID":"3722274c-5a6f-49ef-89ac-06fc5afd3098","Type":"ContainerStarted","Data":"4f688f1e5ddced223937c1d4338578a5f7b3312c2e92771d5a34265a505daa46"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.617182 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.625136 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" event={"ID":"35892343-44c5-4cfb-9061-0b0542d23b99","Type":"ContainerStarted","Data":"a34a737c5a48289a5b2a8d77a488bfa7ece5807b6020bcd2405e9709d58643bf"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.625528 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.639038 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" event={"ID":"d96d5316-a678-427e-aa6f-a606876142d3","Type":"ContainerStarted","Data":"344bae851b1c6cda0148cb2b4ddf006498559d07bb749e776da26f926004e15e"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.640052 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.657036 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" event={"ID":"8e3f82d2-bf0a-4203-80af-3b48711ad1f0","Type":"ContainerStarted","Data":"19485e1cb49a5a2bab9c5e499cdc3edac06d2a5aa40d4e9e0fc8dc73edc8ec80"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.658247 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.692257 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" event={"ID":"1f76a687-e27f-4d78-aeea-c2faca503549","Type":"ContainerStarted","Data":"d60b24e1ec7ed76ebf58aa5784c05e90ea762e79648a8f805e38f70efa21a554"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.693623 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.716807 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" 
event={"ID":"b498e6cd-6f07-461f-bf7a-5842461cbbbe","Type":"ContainerStarted","Data":"69a1ce30262fa466b5ba8e312b0eb1623e6168c6aba079d8c634bd9c4e4df3a0"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.717999 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.743126 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" event={"ID":"36b08aa8-071f-4862-821c-9ee85afcdf8e","Type":"ContainerStarted","Data":"20e9211d61030dc34df43432adac1295379fa7c6de6b63c7a656eaaeed17bf39"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.744195 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ddt7p" podStartSLOduration=4.015928443 podStartE2EDuration="24.744167816s" podCreationTimestamp="2026-02-03 12:21:52 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.557689213 +0000 UTC m=+987.032585301" lastFinishedPulling="2026-02-03 12:22:15.285928586 +0000 UTC m=+1007.760824674" observedRunningTime="2026-02-03 12:22:16.719058037 +0000 UTC m=+1009.193954125" watchObservedRunningTime="2026-02-03 12:22:16.744167816 +0000 UTC m=+1009.219063904" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.745182 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.775107 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" event={"ID":"6b1f821d-79a5-4fe4-bc8a-f850716781e7","Type":"ContainerStarted","Data":"2f492780a08ea95d3d17d70121976c15fe6d2fb5561fed0dee4c0278386b51a8"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.775944 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.785175 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" event={"ID":"e92384fd-2d3b-4ba9-b265-92dbc9941750","Type":"ContainerStarted","Data":"82f52a3d8f793394e837a0b870b513c63758b459395826f9d84151e417bffaf6"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.786057 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.788890 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" event={"ID":"ee886e3f-df4d-43e4-b1ad-8eec77ead216","Type":"ContainerStarted","Data":"e7871750ee37126013d123edc41383e80e5bad9b1fdea4eea96f7a2b63cef2ba"} Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.788932 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.860795 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" podStartSLOduration=7.830392076 podStartE2EDuration="25.860764441s" 
podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.351506582 +0000 UTC m=+986.826402680" lastFinishedPulling="2026-02-03 12:22:12.381878957 +0000 UTC m=+1004.856775045" observedRunningTime="2026-02-03 12:22:16.844720946 +0000 UTC m=+1009.319617034" watchObservedRunningTime="2026-02-03 12:22:16.860764441 +0000 UTC m=+1009.335660529" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.898760 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" podStartSLOduration=9.742320025 podStartE2EDuration="25.898733222s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.415342772 +0000 UTC m=+986.890238860" lastFinishedPulling="2026-02-03 12:22:10.571755969 +0000 UTC m=+1003.046652057" observedRunningTime="2026-02-03 12:22:16.888096337 +0000 UTC m=+1009.362992435" watchObservedRunningTime="2026-02-03 12:22:16.898733222 +0000 UTC m=+1009.373629310" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.954921 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" podStartSLOduration=4.268289198 podStartE2EDuration="24.954901405s" podCreationTimestamp="2026-02-03 12:21:52 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.539717708 +0000 UTC m=+987.014613796" lastFinishedPulling="2026-02-03 12:22:15.226329915 +0000 UTC m=+1007.701226003" observedRunningTime="2026-02-03 12:22:16.952638686 +0000 UTC m=+1009.427534774" watchObservedRunningTime="2026-02-03 12:22:16.954901405 +0000 UTC m=+1009.429797493" Feb 03 12:22:16 crc kubenswrapper[4679]: I0203 12:22:16.998227 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" podStartSLOduration=9.235314139 podStartE2EDuration="25.998198934s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.348557466 +0000 UTC m=+986.823453544" lastFinishedPulling="2026-02-03 12:22:11.111442241 +0000 UTC m=+1003.586338339" observedRunningTime="2026-02-03 12:22:16.990254768 +0000 UTC m=+1009.465150856" watchObservedRunningTime="2026-02-03 12:22:16.998198934 +0000 UTC m=+1009.473095022" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.028283 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" podStartSLOduration=9.268972299 podStartE2EDuration="26.028261621s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.351834331 +0000 UTC m=+986.826730419" lastFinishedPulling="2026-02-03 12:22:11.111123653 +0000 UTC m=+1003.586019741" observedRunningTime="2026-02-03 12:22:17.024492414 +0000 UTC m=+1009.499388502" watchObservedRunningTime="2026-02-03 12:22:17.028261621 +0000 UTC m=+1009.503157709" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.059120 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" podStartSLOduration=5.419188838 podStartE2EDuration="26.059099358s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.47792536 +0000 UTC m=+986.952821448" lastFinishedPulling="2026-02-03 12:22:15.11783588 +0000 UTC m=+1007.592731968" observedRunningTime="2026-02-03 
12:22:17.051886562 +0000 UTC m=+1009.526782650" watchObservedRunningTime="2026-02-03 12:22:17.059099358 +0000 UTC m=+1009.533995446" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.107145 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" podStartSLOduration=5.549353264 podStartE2EDuration="26.10711824s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.561387838 +0000 UTC m=+987.036283926" lastFinishedPulling="2026-02-03 12:22:15.119152814 +0000 UTC m=+1007.594048902" observedRunningTime="2026-02-03 12:22:17.101990807 +0000 UTC m=+1009.576886895" watchObservedRunningTime="2026-02-03 12:22:17.10711824 +0000 UTC m=+1009.582014328" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.130947 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" podStartSLOduration=5.013654718 podStartE2EDuration="25.130921015s" podCreationTimestamp="2026-02-03 12:21:52 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.581521839 +0000 UTC m=+987.056417927" lastFinishedPulling="2026-02-03 12:22:14.698788126 +0000 UTC m=+1007.173684224" observedRunningTime="2026-02-03 12:22:17.124388156 +0000 UTC m=+1009.599284244" watchObservedRunningTime="2026-02-03 12:22:17.130921015 +0000 UTC m=+1009.605817113" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.177318 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" podStartSLOduration=5.552953287 podStartE2EDuration="26.177291724s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.47752798 +0000 UTC m=+986.952424058" lastFinishedPulling="2026-02-03 12:22:15.101866397 +0000 UTC m=+1007.576762495" observedRunningTime="2026-02-03 12:22:17.168267681 +0000 UTC m=+1009.643163779" watchObservedRunningTime="2026-02-03 12:22:17.177291724 +0000 UTC m=+1009.652187812" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.196909 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" podStartSLOduration=9.168754461 podStartE2EDuration="25.196882521s" podCreationTimestamp="2026-02-03 12:21:52 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.543830494 +0000 UTC m=+987.018726582" lastFinishedPulling="2026-02-03 12:22:10.571958554 +0000 UTC m=+1003.046854642" observedRunningTime="2026-02-03 12:22:17.194648283 +0000 UTC m=+1009.669544361" watchObservedRunningTime="2026-02-03 12:22:17.196882521 +0000 UTC m=+1009.671778609" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.225321 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" podStartSLOduration=8.558178536 podStartE2EDuration="25.225298545s" podCreationTimestamp="2026-02-03 12:21:52 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.44578557 +0000 UTC m=+986.920681658" lastFinishedPulling="2026-02-03 12:22:11.112905579 +0000 UTC m=+1003.587801667" observedRunningTime="2026-02-03 12:22:17.219098765 +0000 UTC m=+1009.693994853" watchObservedRunningTime="2026-02-03 12:22:17.225298545 +0000 UTC m=+1009.700194623" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.256897 4679 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" podStartSLOduration=9.486613736 podStartE2EDuration="26.256873002s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.340633531 +0000 UTC m=+986.815529619" lastFinishedPulling="2026-02-03 12:22:11.110892797 +0000 UTC m=+1003.585788885" observedRunningTime="2026-02-03 12:22:17.256298987 +0000 UTC m=+1009.731195075" watchObservedRunningTime="2026-02-03 12:22:17.256873002 +0000 UTC m=+1009.731769090" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.294236 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" podStartSLOduration=9.821688688 podStartE2EDuration="26.294209017s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.093240125 +0000 UTC m=+986.568136203" lastFinishedPulling="2026-02-03 12:22:10.565760434 +0000 UTC m=+1003.040656532" observedRunningTime="2026-02-03 12:22:17.29123883 +0000 UTC m=+1009.766134918" watchObservedRunningTime="2026-02-03 12:22:17.294209017 +0000 UTC m=+1009.769105115" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.374257 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" podStartSLOduration=5.795771985 podStartE2EDuration="26.374227856s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.539385019 +0000 UTC m=+987.014281107" lastFinishedPulling="2026-02-03 12:22:15.11784088 +0000 UTC m=+1007.592736978" observedRunningTime="2026-02-03 12:22:17.336593613 +0000 UTC m=+1009.811489701" watchObservedRunningTime="2026-02-03 12:22:17.374227856 +0000 UTC m=+1009.849123944" Feb 03 12:22:17 crc kubenswrapper[4679]: I0203 12:22:17.380760 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" podStartSLOduration=9.277295644 podStartE2EDuration="26.380730054s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:53.468473973 +0000 UTC m=+985.943370061" lastFinishedPulling="2026-02-03 12:22:10.571908383 +0000 UTC m=+1003.046804471" observedRunningTime="2026-02-03 12:22:17.366984348 +0000 UTC m=+1009.841880436" watchObservedRunningTime="2026-02-03 12:22:17.380730054 +0000 UTC m=+1009.855626152" Feb 03 12:22:20 crc kubenswrapper[4679]: I0203 12:22:20.825276 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" event={"ID":"2de6e912-5456-4209-85d7-2bddcedc0384","Type":"ContainerStarted","Data":"b5164ff99a82630e2ed92fd9285c464e23ec56001e6b90afe84da15a4b531d80"} Feb 03 12:22:20 crc kubenswrapper[4679]: I0203 12:22:20.825853 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:22:20 crc kubenswrapper[4679]: I0203 12:22:20.845527 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" podStartSLOduration=24.023771548 podStartE2EDuration="29.845499289s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:22:14.721479353 +0000 UTC m=+1007.196375441" lastFinishedPulling="2026-02-03 12:22:20.543207094 +0000 UTC 
m=+1013.018103182" observedRunningTime="2026-02-03 12:22:20.843736394 +0000 UTC m=+1013.318632482" watchObservedRunningTime="2026-02-03 12:22:20.845499289 +0000 UTC m=+1013.320395387" Feb 03 12:22:21 crc kubenswrapper[4679]: I0203 12:22:21.845632 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-x9kws" Feb 03 12:22:21 crc kubenswrapper[4679]: I0203 12:22:21.892379 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-vgbcs" Feb 03 12:22:21 crc kubenswrapper[4679]: I0203 12:22:21.941707 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7p976" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.021178 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-pwgd6" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.069557 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-6l9l6" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.082343 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-xhb56" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.137194 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-h77pz" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.224371 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-m6jbm" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.256157 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-8gc44" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.311105 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nvx58" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.313046 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-4pvxk" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.396004 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-42stf" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.535252 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-46lw2" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.550850 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-vxkt7" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.841758 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" event={"ID":"9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4","Type":"ContainerStarted","Data":"a2092c649c43647038de9f2fa61ce7a5af2ee0671425b8af4ed4fccac2af0264"} Feb 03 
Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.841975 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.863532 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" podStartSLOduration=4.294628586 podStartE2EDuration="31.863504541s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.399103343 +0000 UTC m=+986.873999431" lastFinishedPulling="2026-02-03 12:22:21.967979298 +0000 UTC m=+1014.442875386" observedRunningTime="2026-02-03 12:22:22.856409928 +0000 UTC m=+1015.331306016" watchObservedRunningTime="2026-02-03 12:22:22.863504541 +0000 UTC m=+1015.338400639" Feb 03 12:22:22 crc kubenswrapper[4679]: I0203 12:22:22.978535 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-wktnn" Feb 03 12:22:23 crc kubenswrapper[4679]: I0203 12:22:23.039028 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-89mxg" Feb 03 12:22:23 crc kubenswrapper[4679]: I0203 12:22:23.078083 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-w6xc9" Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.074769 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.082648 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/11b2dd9f-a9fc-427c-a2a2-744484f359b4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x\" (UID: \"11b2dd9f-a9fc-427c-a2a2-744484f359b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.129940 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.565851 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x"] Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.858700 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" event={"ID":"11b2dd9f-a9fc-427c-a2a2-744484f359b4","Type":"ContainerStarted","Data":"0d495e9b0793f9d3c94d0ab8245a10245a0644cd4f69c0191d8e7a8183dc8114"} Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.888511 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.895759 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ab40b-9540-48ad-b1c7-7b5b1603e4d2-metrics-certs\") pod \"openstack-operator-controller-manager-599dbc9849-9t5wf\" (UID: \"632ab40b-9540-48ad-b1c7-7b5b1603e4d2\") " pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:24 crc kubenswrapper[4679]: I0203 12:22:24.989696 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:25 crc kubenswrapper[4679]: I0203 12:22:25.424772 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf"] Feb 03 12:22:25 crc kubenswrapper[4679]: I0203 12:22:25.867679 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" event={"ID":"632ab40b-9540-48ad-b1c7-7b5b1603e4d2","Type":"ContainerStarted","Data":"a04d8e3491c3112e1b1a07053dbb6d80335f546705ca180279b578133c584fa7"} Feb 03 12:22:27 crc kubenswrapper[4679]: I0203 12:22:27.874811 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vgg4d" Feb 03 12:22:30 crc kubenswrapper[4679]: I0203 12:22:30.906596 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" event={"ID":"632ab40b-9540-48ad-b1c7-7b5b1603e4d2","Type":"ContainerStarted","Data":"7655482550e87106bb99c4e65d0d49a678b032785a21856a8dee89b3f4117484"} Feb 03 12:22:31 crc kubenswrapper[4679]: I0203 12:22:31.901039 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-k5gcz" Feb 03 12:22:31 crc kubenswrapper[4679]: I0203 12:22:31.914293 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:33 crc kubenswrapper[4679]: I0203 12:22:33.936904 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" 
event={"ID":"79b06c14-7e75-4306-8001-3217809de327","Type":"ContainerStarted","Data":"ca4e8c75a759294368bce36c4b92775b2378037a03500b253a7808e3544bc83c"} Feb 03 12:22:33 crc kubenswrapper[4679]: I0203 12:22:33.938634 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:22:33 crc kubenswrapper[4679]: I0203 12:22:33.958184 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" podStartSLOduration=41.958159434 podStartE2EDuration="41.958159434s" podCreationTimestamp="2026-02-03 12:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:22:31.952123162 +0000 UTC m=+1024.427019290" watchObservedRunningTime="2026-02-03 12:22:33.958159434 +0000 UTC m=+1026.433055522" Feb 03 12:22:33 crc kubenswrapper[4679]: I0203 12:22:33.960562 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" podStartSLOduration=4.453459331 podStartE2EDuration="42.960554756s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:21:54.39863017 +0000 UTC m=+986.873526258" lastFinishedPulling="2026-02-03 12:22:32.905725595 +0000 UTC m=+1025.380621683" observedRunningTime="2026-02-03 12:22:33.957656651 +0000 UTC m=+1026.432552759" watchObservedRunningTime="2026-02-03 12:22:33.960554756 +0000 UTC m=+1026.435450844" Feb 03 12:22:34 crc kubenswrapper[4679]: I0203 12:22:34.944852 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" event={"ID":"11b2dd9f-a9fc-427c-a2a2-744484f359b4","Type":"ContainerStarted","Data":"4debc16aee379953700b7863be581b63e37a03d1e75bcacd331d874342d48013"} Feb 03 12:22:34 crc kubenswrapper[4679]: I0203 12:22:34.983838 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" podStartSLOduration=34.449671193 podStartE2EDuration="43.983807331s" podCreationTimestamp="2026-02-03 12:21:51 +0000 UTC" firstStartedPulling="2026-02-03 12:22:24.57616801 +0000 UTC m=+1017.051064098" lastFinishedPulling="2026-02-03 12:22:34.110304148 +0000 UTC m=+1026.585200236" observedRunningTime="2026-02-03 12:22:34.977424076 +0000 UTC m=+1027.452320184" watchObservedRunningTime="2026-02-03 12:22:34.983807331 +0000 UTC m=+1027.458703419" Feb 03 12:22:35 crc kubenswrapper[4679]: I0203 12:22:35.953246 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.735162 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.735231 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.735286 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.736036 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c7f7c8aab7624469328d1de64c08b453b25a5c763215ebe97829d21aefac1e6"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.736104 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://8c7f7c8aab7624469328d1de64c08b453b25a5c763215ebe97829d21aefac1e6" gracePeriod=600 Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.963231 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="8c7f7c8aab7624469328d1de64c08b453b25a5c763215ebe97829d21aefac1e6" exitCode=0 Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.963311 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"8c7f7c8aab7624469328d1de64c08b453b25a5c763215ebe97829d21aefac1e6"} Feb 03 12:22:36 crc kubenswrapper[4679]: I0203 12:22:36.963785 4679 scope.go:117] "RemoveContainer" containerID="4f8303cada887334f02fe5707aa2a3b67415ca179b5f84b540c5e9544432dd4c" Feb 03 12:22:37 crc kubenswrapper[4679]: I0203 12:22:37.974192 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"c61e86b798113e11bc0b821a5c8a3fd559823d817a33888c3615c45ebf2d2b95"} Feb 03 12:22:42 crc kubenswrapper[4679]: I0203 12:22:42.311985 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bllmz" Feb 03 12:22:44 crc kubenswrapper[4679]: I0203 12:22:44.138975 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x" Feb 03 12:22:44 crc kubenswrapper[4679]: I0203 12:22:44.997588 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-599dbc9849-9t5wf" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.471552 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-25xbh"] Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.474124 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.478220 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.478581 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9p46z" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.478737 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.478771 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-25xbh"] Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.482052 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.571081 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7tn2f"] Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.576077 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.579338 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.589245 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7tn2f"] Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.646722 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb90e61e-cc15-4860-adc8-1d6adc3e065a-config\") pod \"dnsmasq-dns-675f4bcbfc-25xbh\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.646787 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jvvl\" (UniqueName: \"kubernetes.io/projected/cb90e61e-cc15-4860-adc8-1d6adc3e065a-kube-api-access-9jvvl\") pod \"dnsmasq-dns-675f4bcbfc-25xbh\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.748088 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-config\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.748166 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztz52\" (UniqueName: \"kubernetes.io/projected/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-kube-api-access-ztz52\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.748267 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jvvl\" (UniqueName: \"kubernetes.io/projected/cb90e61e-cc15-4860-adc8-1d6adc3e065a-kube-api-access-9jvvl\") pod \"dnsmasq-dns-675f4bcbfc-25xbh\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.748303 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.748445 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb90e61e-cc15-4860-adc8-1d6adc3e065a-config\") pod \"dnsmasq-dns-675f4bcbfc-25xbh\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.749503 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb90e61e-cc15-4860-adc8-1d6adc3e065a-config\") pod \"dnsmasq-dns-675f4bcbfc-25xbh\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.771807 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jvvl\" (UniqueName: \"kubernetes.io/projected/cb90e61e-cc15-4860-adc8-1d6adc3e065a-kube-api-access-9jvvl\") pod \"dnsmasq-dns-675f4bcbfc-25xbh\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.830809 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.850137 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-config\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.850209 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztz52\" (UniqueName: \"kubernetes.io/projected/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-kube-api-access-ztz52\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.850246 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.851419 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.851537 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-config\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: 
\"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.875020 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztz52\" (UniqueName: \"kubernetes.io/projected/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-kube-api-access-ztz52\") pod \"dnsmasq-dns-78dd6ddcc-7tn2f\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:22:59 crc kubenswrapper[4679]: I0203 12:22:59.892388 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:23:00 crc kubenswrapper[4679]: I0203 12:23:00.304417 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-25xbh"] Feb 03 12:23:00 crc kubenswrapper[4679]: I0203 12:23:00.312172 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:23:00 crc kubenswrapper[4679]: I0203 12:23:00.406017 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7tn2f"] Feb 03 12:23:00 crc kubenswrapper[4679]: W0203 12:23:00.410411 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d8d813_ce1b_40b8_9efb_9f1222ccdccd.slice/crio-a67356758d08f40ce5a989e6c9231a5a079975309d4140db1880e93d96771373 WatchSource:0}: Error finding container a67356758d08f40ce5a989e6c9231a5a079975309d4140db1880e93d96771373: Status 404 returned error can't find the container with id a67356758d08f40ce5a989e6c9231a5a079975309d4140db1880e93d96771373 Feb 03 12:23:01 crc kubenswrapper[4679]: I0203 12:23:01.158563 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" event={"ID":"cb90e61e-cc15-4860-adc8-1d6adc3e065a","Type":"ContainerStarted","Data":"76e9ed0f1cd8f3a9cf9e0c7d14be8343eb3d4a328a8013509b6d7aa254044af8"} Feb 03 12:23:01 crc kubenswrapper[4679]: I0203 12:23:01.164205 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" event={"ID":"78d8d813-ce1b-40b8-9efb-9f1222ccdccd","Type":"ContainerStarted","Data":"a67356758d08f40ce5a989e6c9231a5a079975309d4140db1880e93d96771373"} Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.331115 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-25xbh"] Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.366284 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ntpjg"] Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.367800 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.381498 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ntpjg"] Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.498714 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.498778 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-config\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.498806 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shb64\" (UniqueName: \"kubernetes.io/projected/ef753760-d109-4876-b6ee-acf5cd520fd5-kube-api-access-shb64\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.600715 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.600802 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-config\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.600839 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shb64\" (UniqueName: \"kubernetes.io/projected/ef753760-d109-4876-b6ee-acf5cd520fd5-kube-api-access-shb64\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.602394 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.603057 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-config\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.635371 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shb64\" (UniqueName: 
\"kubernetes.io/projected/ef753760-d109-4876-b6ee-acf5cd520fd5-kube-api-access-shb64\") pod \"dnsmasq-dns-666b6646f7-ntpjg\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.699235 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.816926 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7tn2f"] Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.893604 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6glvf"] Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.898893 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:02 crc kubenswrapper[4679]: I0203 12:23:02.940619 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6glvf"] Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.012329 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7cs\" (UniqueName: \"kubernetes.io/projected/586e393a-7260-41c1-8307-5ec717cd5275-kube-api-access-lr7cs\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.012408 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.012466 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-config\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.116636 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7cs\" (UniqueName: \"kubernetes.io/projected/586e393a-7260-41c1-8307-5ec717cd5275-kube-api-access-lr7cs\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.116735 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.116817 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-config\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.118127 4679 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-config\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.118174 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.146792 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7cs\" (UniqueName: \"kubernetes.io/projected/586e393a-7260-41c1-8307-5ec717cd5275-kube-api-access-lr7cs\") pod \"dnsmasq-dns-57d769cc4f-6glvf\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.274467 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.388680 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ntpjg"] Feb 03 12:23:03 crc kubenswrapper[4679]: W0203 12:23:03.399872 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef753760_d109_4876_b6ee_acf5cd520fd5.slice/crio-7794d0c3e96b939a1b25e66ecd1d8333e3317cf9976c04b8cc38801cc9e1fcaa WatchSource:0}: Error finding container 7794d0c3e96b939a1b25e66ecd1d8333e3317cf9976c04b8cc38801cc9e1fcaa: Status 404 returned error can't find the container with id 7794d0c3e96b939a1b25e66ecd1d8333e3317cf9976c04b8cc38801cc9e1fcaa Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.549380 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.550643 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.554469 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.554685 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xf7k9" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.554818 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.554937 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.555052 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.555880 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.556049 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.578584 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.625824 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.625883 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.625904 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-config-data\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.625948 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.625974 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.626025 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.626057 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438272b0-d957-44f7-aa5e-502ce5189f9c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.626078 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.626103 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438272b0-d957-44f7-aa5e-502ce5189f9c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.626121 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.626150 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xmw\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-kube-api-access-42xmw\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727350 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438272b0-d957-44f7-aa5e-502ce5189f9c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727798 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727835 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727853 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438272b0-d957-44f7-aa5e-502ce5189f9c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727885 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42xmw\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-kube-api-access-42xmw\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727915 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727936 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.727964 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-config-data\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.728005 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.728022 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.728074 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.728485 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.728645 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.729136 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-config-data\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.730019 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.730708 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.731581 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.736791 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438272b0-d957-44f7-aa5e-502ce5189f9c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.738494 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438272b0-d957-44f7-aa5e-502ce5189f9c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.738538 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.745753 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.747165 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42xmw\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-kube-api-access-42xmw\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.759594 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " pod="openstack/rabbitmq-server-0" Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.801753 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-6glvf"] Feb 03 12:23:03 crc kubenswrapper[4679]: W0203 12:23:03.812613 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod586e393a_7260_41c1_8307_5ec717cd5275.slice/crio-1ea656711df5f8587427d055cc6788b68c3a557c895dafc9a4004863be3ad909 WatchSource:0}: Error finding container 1ea656711df5f8587427d055cc6788b68c3a557c895dafc9a4004863be3ad909: Status 404 returned error can't find the container with id 1ea656711df5f8587427d055cc6788b68c3a557c895dafc9a4004863be3ad909 Feb 03 12:23:03 crc kubenswrapper[4679]: I0203 12:23:03.881755 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.006771 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.009073 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.012932 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.012995 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.013134 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.013441 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.013471 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.013584 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.013847 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-crtc8" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.030037 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133617 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133669 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133709 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133794 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133834 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133857 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133881 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73f156fc-e458-470c-ad7b-24125be5762c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133922 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.133958 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.134491 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-kube-api-access-4zzld\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.134652 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73f156fc-e458-470c-ad7b-24125be5762c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.199838 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" 
event={"ID":"586e393a-7260-41c1-8307-5ec717cd5275","Type":"ContainerStarted","Data":"1ea656711df5f8587427d055cc6788b68c3a557c895dafc9a4004863be3ad909"} Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.205022 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" event={"ID":"ef753760-d109-4876-b6ee-acf5cd520fd5","Type":"ContainerStarted","Data":"7794d0c3e96b939a1b25e66ecd1d8333e3317cf9976c04b8cc38801cc9e1fcaa"} Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236769 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73f156fc-e458-470c-ad7b-24125be5762c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236830 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236855 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236881 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236929 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236959 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236977 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.236996 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73f156fc-e458-470c-ad7b-24125be5762c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.237024 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.237045 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.237081 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-kube-api-access-4zzld\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.237640 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.238563 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.239645 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.240353 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.240692 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.241394 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.246788 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.247012 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.247837 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73f156fc-e458-470c-ad7b-24125be5762c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.255292 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73f156fc-e458-470c-ad7b-24125be5762c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.264748 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-kube-api-access-4zzld\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.272894 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.352639 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.493746 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.950483 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.968595 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.969935 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.973118 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-f58cx" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.973234 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.973516 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.975005 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.982414 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 03 12:23:04 crc kubenswrapper[4679]: I0203 12:23:04.987772 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.051108 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.051188 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.051219 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.065733 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.065872 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.065946 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.065982 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9xh9f\" (UniqueName: \"kubernetes.io/projected/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-kube-api-access-9xh9f\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.066121 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168633 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168737 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168778 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168809 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168833 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168875 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168907 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.168933 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xh9f\" (UniqueName: \"kubernetes.io/projected/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-kube-api-access-9xh9f\") pod \"openstack-galera-0\" (UID: 
\"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.170097 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.170157 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.170344 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-config-data-default\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.170726 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-kolla-config\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.173103 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.173136 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.176707 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.191821 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xh9f\" (UniqueName: \"kubernetes.io/projected/e8a14eb9-fdf3-44dc-b8a8-0494fd209dea-kube-api-access-9xh9f\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.217924 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea\") " pod="openstack/openstack-galera-0" Feb 03 12:23:05 crc kubenswrapper[4679]: I0203 12:23:05.356918 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.179261 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.181326 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.189965 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.197753 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.198063 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.198239 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7tg5c" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.208083 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.297928 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468c4\" (UniqueName: \"kubernetes.io/projected/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-kube-api-access-468c4\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298001 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298045 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298074 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298128 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298174 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298209 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.298232 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399650 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399749 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399801 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399896 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468c4\" (UniqueName: \"kubernetes.io/projected/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-kube-api-access-468c4\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399920 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399966 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.399986 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.400030 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.401027 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.401831 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.402510 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.405390 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.406177 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.416102 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.444177 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.446859 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468c4\" (UniqueName: \"kubernetes.io/projected/b788c2a3-0e8f-4a4a-b121-f4c021b4932c-kube-api-access-468c4\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc 
kubenswrapper[4679]: I0203 12:23:06.485522 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b788c2a3-0e8f-4a4a-b121-f4c021b4932c\") " pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.542952 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.803677 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.804821 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.807321 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.807806 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ssknc" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.808010 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.829886 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.923663 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-config-data\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.923735 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2wm2\" (UniqueName: \"kubernetes.io/projected/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-kube-api-access-q2wm2\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.923782 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.923851 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:06 crc kubenswrapper[4679]: I0203 12:23:06.923896 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-kolla-config\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.025957 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-kolla-config\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.026976 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-kolla-config\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.027710 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-config-data\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.027161 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-config-data\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.027783 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2wm2\" (UniqueName: \"kubernetes.io/projected/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-kube-api-access-q2wm2\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.028152 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.028242 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.035027 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.035053 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.052214 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2wm2\" (UniqueName: \"kubernetes.io/projected/9e899cda-42d0-40ae-a9c6-34f4bbad9fe7-kube-api-access-q2wm2\") pod \"memcached-0\" (UID: \"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7\") " pod="openstack/memcached-0" Feb 03 12:23:07 crc kubenswrapper[4679]: I0203 12:23:07.145234 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 03 12:23:08 crc kubenswrapper[4679]: I0203 12:23:08.759520 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:23:08 crc kubenswrapper[4679]: I0203 12:23:08.760988 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:23:08 crc kubenswrapper[4679]: I0203 12:23:08.772166 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mplh4" Feb 03 12:23:08 crc kubenswrapper[4679]: I0203 12:23:08.819719 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:23:08 crc kubenswrapper[4679]: I0203 12:23:08.909576 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7hq\" (UniqueName: \"kubernetes.io/projected/cfc55122-b95a-43ed-bec8-9262c84e0fa5-kube-api-access-xf7hq\") pod \"kube-state-metrics-0\" (UID: \"cfc55122-b95a-43ed-bec8-9262c84e0fa5\") " pod="openstack/kube-state-metrics-0" Feb 03 12:23:09 crc kubenswrapper[4679]: I0203 12:23:09.011406 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7hq\" (UniqueName: \"kubernetes.io/projected/cfc55122-b95a-43ed-bec8-9262c84e0fa5-kube-api-access-xf7hq\") pod \"kube-state-metrics-0\" (UID: \"cfc55122-b95a-43ed-bec8-9262c84e0fa5\") " pod="openstack/kube-state-metrics-0" Feb 03 12:23:09 crc kubenswrapper[4679]: I0203 12:23:09.042674 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7hq\" (UniqueName: \"kubernetes.io/projected/cfc55122-b95a-43ed-bec8-9262c84e0fa5-kube-api-access-xf7hq\") pod \"kube-state-metrics-0\" (UID: \"cfc55122-b95a-43ed-bec8-9262c84e0fa5\") " pod="openstack/kube-state-metrics-0" Feb 03 12:23:09 crc kubenswrapper[4679]: I0203 12:23:09.121764 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:23:10 crc kubenswrapper[4679]: W0203 12:23:10.895935 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod438272b0_d957_44f7_aa5e_502ce5189f9c.slice/crio-6e08d3d75d65957fde56633b93a5ce5959fba8d07080b88ecf92dbf47bef1654 WatchSource:0}: Error finding container 6e08d3d75d65957fde56633b93a5ce5959fba8d07080b88ecf92dbf47bef1654: Status 404 returned error can't find the container with id 6e08d3d75d65957fde56633b93a5ce5959fba8d07080b88ecf92dbf47bef1654 Feb 03 12:23:11 crc kubenswrapper[4679]: I0203 12:23:11.314205 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73f156fc-e458-470c-ad7b-24125be5762c","Type":"ContainerStarted","Data":"0a6a83f46807df55ee01d13620f8aa3126b6937bdbaa9b773f33e2b28da1bf39"} Feb 03 12:23:11 crc kubenswrapper[4679]: I0203 12:23:11.316886 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438272b0-d957-44f7-aa5e-502ce5189f9c","Type":"ContainerStarted","Data":"6e08d3d75d65957fde56633b93a5ce5959fba8d07080b88ecf92dbf47bef1654"} Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.292896 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5tt4c"] Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.294317 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.301123 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.301445 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-r2stg" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.301606 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.328472 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tt4c"] Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.390382 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9zrqh"] Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.392740 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.400671 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9zrqh"] Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401249 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28tln\" (UniqueName: \"kubernetes.io/projected/c908c598-a229-467c-8430-de77205f95ec-kube-api-access-28tln\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401319 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c908c598-a229-467c-8430-de77205f95ec-ovn-controller-tls-certs\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401350 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c908c598-a229-467c-8430-de77205f95ec-combined-ca-bundle\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401404 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-run\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401478 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c908c598-a229-467c-8430-de77205f95ec-scripts\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401541 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-log-ovn\") pod \"ovn-controller-5tt4c\" (UID: 
\"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.401576 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-run-ovn\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503316 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6mfr\" (UniqueName: \"kubernetes.io/projected/4d10dd12-5213-414c-bd2b-76396833ad19-kube-api-access-z6mfr\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503447 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-run\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503478 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-log\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503507 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d10dd12-5213-414c-bd2b-76396833ad19-scripts\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503538 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-log-ovn\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503565 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-run-ovn\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503611 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-etc-ovs\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503641 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-lib\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 
12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503667 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28tln\" (UniqueName: \"kubernetes.io/projected/c908c598-a229-467c-8430-de77205f95ec-kube-api-access-28tln\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503801 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c908c598-a229-467c-8430-de77205f95ec-ovn-controller-tls-certs\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503859 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c908c598-a229-467c-8430-de77205f95ec-combined-ca-bundle\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.503889 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-run\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.504043 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c908c598-a229-467c-8430-de77205f95ec-scripts\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.504783 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-run-ovn\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.504894 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-run\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.507238 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c908c598-a229-467c-8430-de77205f95ec-scripts\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.508571 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c908c598-a229-467c-8430-de77205f95ec-var-log-ovn\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.513423 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c908c598-a229-467c-8430-de77205f95ec-ovn-controller-tls-certs\") 
pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.513552 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c908c598-a229-467c-8430-de77205f95ec-combined-ca-bundle\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.525835 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28tln\" (UniqueName: \"kubernetes.io/projected/c908c598-a229-467c-8430-de77205f95ec-kube-api-access-28tln\") pod \"ovn-controller-5tt4c\" (UID: \"c908c598-a229-467c-8430-de77205f95ec\") " pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.605464 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6mfr\" (UniqueName: \"kubernetes.io/projected/4d10dd12-5213-414c-bd2b-76396833ad19-kube-api-access-z6mfr\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.606012 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-run\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.606142 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-run\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.606200 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-log\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.606389 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-log\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.606455 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d10dd12-5213-414c-bd2b-76396833ad19-scripts\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.608607 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d10dd12-5213-414c-bd2b-76396833ad19-scripts\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.614653 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-etc-ovs\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.614744 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-lib\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.615083 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-var-lib\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.615114 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.615311 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4d10dd12-5213-414c-bd2b-76396833ad19-etc-ovs\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.616696 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.621001 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5pzbc" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.625869 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.626724 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.626868 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.626913 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.627844 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6mfr\" (UniqueName: \"kubernetes.io/projected/4d10dd12-5213-414c-bd2b-76396833ad19-kube-api-access-z6mfr\") pod \"ovn-controller-ovs-9zrqh\" (UID: \"4d10dd12-5213-414c-bd2b-76396833ad19\") " pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.630488 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.638451 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.719867 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721299 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf6e3dac-ec8b-422b-9459-3554f884594d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721382 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721420 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721486 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf6e3dac-ec8b-422b-9459-3554f884594d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721635 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf6e3dac-ec8b-422b-9459-3554f884594d-config\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721665 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721713 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tv98\" (UniqueName: \"kubernetes.io/projected/bf6e3dac-ec8b-422b-9459-3554f884594d-kube-api-access-4tv98\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.721738 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824134 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf6e3dac-ec8b-422b-9459-3554f884594d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824212 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824245 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824279 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf6e3dac-ec8b-422b-9459-3554f884594d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824440 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf6e3dac-ec8b-422b-9459-3554f884594d-config\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824474 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824516 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tv98\" (UniqueName: \"kubernetes.io/projected/bf6e3dac-ec8b-422b-9459-3554f884594d-kube-api-access-4tv98\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824549 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.824716 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.825615 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf6e3dac-ec8b-422b-9459-3554f884594d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.826117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf6e3dac-ec8b-422b-9459-3554f884594d-config\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " 
pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.826462 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf6e3dac-ec8b-422b-9459-3554f884594d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.830608 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.831066 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.849572 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf6e3dac-ec8b-422b-9459-3554f884594d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.857130 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tv98\" (UniqueName: \"kubernetes.io/projected/bf6e3dac-ec8b-422b-9459-3554f884594d-kube-api-access-4tv98\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.868607 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bf6e3dac-ec8b-422b-9459-3554f884594d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:12 crc kubenswrapper[4679]: I0203 12:23:12.969769 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.378925 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.381559 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.383704 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6rxkb" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.385705 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.389507 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.401618 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.431263 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.509662 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.509942 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdftz\" (UniqueName: \"kubernetes.io/projected/ea35f3b6-94df-45c5-9b94-af55636b7ad0-kube-api-access-hdftz\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.510090 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.510179 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.510508 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea35f3b6-94df-45c5-9b94-af55636b7ad0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.510621 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea35f3b6-94df-45c5-9b94-af55636b7ad0-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.510750 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.510825 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea35f3b6-94df-45c5-9b94-af55636b7ad0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612339 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612415 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdftz\" (UniqueName: \"kubernetes.io/projected/ea35f3b6-94df-45c5-9b94-af55636b7ad0-kube-api-access-hdftz\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612468 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612495 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612528 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea35f3b6-94df-45c5-9b94-af55636b7ad0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612545 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea35f3b6-94df-45c5-9b94-af55636b7ad0-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612571 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.612607 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ea35f3b6-94df-45c5-9b94-af55636b7ad0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.613188 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ea35f3b6-94df-45c5-9b94-af55636b7ad0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.614191 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.615259 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea35f3b6-94df-45c5-9b94-af55636b7ad0-config\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.615817 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea35f3b6-94df-45c5-9b94-af55636b7ad0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.622235 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.628818 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.632787 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdftz\" (UniqueName: \"kubernetes.io/projected/ea35f3b6-94df-45c5-9b94-af55636b7ad0-kube-api-access-hdftz\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.634569 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea35f3b6-94df-45c5-9b94-af55636b7ad0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.639214 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ea35f3b6-94df-45c5-9b94-af55636b7ad0\") " pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:15 crc kubenswrapper[4679]: I0203 12:23:15.735618 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.891574 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.892336 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shb64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ntpjg_openstack(ef753760-d109-4876-b6ee-acf5cd520fd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.893916 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.908677 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.908894 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lr7cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-6glvf_openstack(586e393a-7260-41c1-8307-5ec717cd5275): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.910314 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" podUID="586e393a-7260-41c1-8307-5ec717cd5275" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.913881 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.914051 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jvvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-25xbh_openstack(cb90e61e-cc15-4860-adc8-1d6adc3e065a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.915262 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" podUID="cb90e61e-cc15-4860-adc8-1d6adc3e065a" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.936538 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.936834 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztz52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7tn2f_openstack(78d8d813-ce1b-40b8-9efb-9f1222ccdccd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:23:19 crc kubenswrapper[4679]: E0203 12:23:19.938042 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" podUID="78d8d813-ce1b-40b8-9efb-9f1222ccdccd" Feb 03 12:23:20 crc kubenswrapper[4679]: I0203 12:23:20.264653 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:23:20 crc kubenswrapper[4679]: E0203 12:23:20.429889 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" Feb 03 12:23:20 crc kubenswrapper[4679]: E0203 12:23:20.429958 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" podUID="586e393a-7260-41c1-8307-5ec717cd5275" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.212983 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.253693 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.269286 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-dns-svc\") pod \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.269380 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztz52\" (UniqueName: \"kubernetes.io/projected/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-kube-api-access-ztz52\") pod \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.269574 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-config\") pod \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\" (UID: \"78d8d813-ce1b-40b8-9efb-9f1222ccdccd\") " Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.272546 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78d8d813-ce1b-40b8-9efb-9f1222ccdccd" (UID: "78d8d813-ce1b-40b8-9efb-9f1222ccdccd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.275537 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-config" (OuterVolumeSpecName: "config") pod "78d8d813-ce1b-40b8-9efb-9f1222ccdccd" (UID: "78d8d813-ce1b-40b8-9efb-9f1222ccdccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.284671 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-kube-api-access-ztz52" (OuterVolumeSpecName: "kube-api-access-ztz52") pod "78d8d813-ce1b-40b8-9efb-9f1222ccdccd" (UID: "78d8d813-ce1b-40b8-9efb-9f1222ccdccd"). InnerVolumeSpecName "kube-api-access-ztz52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.371450 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jvvl\" (UniqueName: \"kubernetes.io/projected/cb90e61e-cc15-4860-adc8-1d6adc3e065a-kube-api-access-9jvvl\") pod \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.371572 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb90e61e-cc15-4860-adc8-1d6adc3e065a-config\") pod \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\" (UID: \"cb90e61e-cc15-4860-adc8-1d6adc3e065a\") " Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.372117 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztz52\" (UniqueName: \"kubernetes.io/projected/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-kube-api-access-ztz52\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.372136 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.372152 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78d8d813-ce1b-40b8-9efb-9f1222ccdccd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.372646 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb90e61e-cc15-4860-adc8-1d6adc3e065a-config" (OuterVolumeSpecName: "config") pod "cb90e61e-cc15-4860-adc8-1d6adc3e065a" (UID: "cb90e61e-cc15-4860-adc8-1d6adc3e065a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.375656 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb90e61e-cc15-4860-adc8-1d6adc3e065a-kube-api-access-9jvvl" (OuterVolumeSpecName: "kube-api-access-9jvvl") pod "cb90e61e-cc15-4860-adc8-1d6adc3e065a" (UID: "cb90e61e-cc15-4860-adc8-1d6adc3e065a"). InnerVolumeSpecName "kube-api-access-9jvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.440238 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.440232 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-25xbh" event={"ID":"cb90e61e-cc15-4860-adc8-1d6adc3e065a","Type":"ContainerDied","Data":"76e9ed0f1cd8f3a9cf9e0c7d14be8343eb3d4a328a8013509b6d7aa254044af8"} Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.444442 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" event={"ID":"78d8d813-ce1b-40b8-9efb-9f1222ccdccd","Type":"ContainerDied","Data":"a67356758d08f40ce5a989e6c9231a5a079975309d4140db1880e93d96771373"} Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.444552 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7tn2f" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.457033 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc55122-b95a-43ed-bec8-9262c84e0fa5","Type":"ContainerStarted","Data":"9dd93e633e7b3509c40665378ca93dc79837dbcac652188227169c6b723b68bc"} Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.474035 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jvvl\" (UniqueName: \"kubernetes.io/projected/cb90e61e-cc15-4860-adc8-1d6adc3e065a-kube-api-access-9jvvl\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.474064 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb90e61e-cc15-4860-adc8-1d6adc3e065a-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.533974 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.545627 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-25xbh"] Feb 03 12:23:23 crc kubenswrapper[4679]: W0203 12:23:23.558855 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a14eb9_fdf3_44dc_b8a8_0494fd209dea.slice/crio-cd993c6d71836d29774b9bec336913354426b3a9992de430024fb1438ef394e0 WatchSource:0}: Error finding container cd993c6d71836d29774b9bec336913354426b3a9992de430024fb1438ef394e0: Status 404 returned error can't find the container with id cd993c6d71836d29774b9bec336913354426b3a9992de430024fb1438ef394e0 Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.559189 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-25xbh"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.572455 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.600755 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7tn2f"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.609670 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7tn2f"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.644521 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.657654 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tt4c"] Feb 03 12:23:23 crc kubenswrapper[4679]: W0203 12:23:23.669445 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc908c598_a229_467c_8430_de77205f95ec.slice/crio-522a856a5e4920969a8b2adbf0f374b9517c988963bf687a932b3912d21cc23a WatchSource:0}: Error finding container 522a856a5e4920969a8b2adbf0f374b9517c988963bf687a932b3912d21cc23a: Status 404 returned error can't find the container with id 522a856a5e4920969a8b2adbf0f374b9517c988963bf687a932b3912d21cc23a Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.854493 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 12:23:23 crc kubenswrapper[4679]: I0203 12:23:23.958816 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ovs-9zrqh"] Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.222664 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d8d813-ce1b-40b8-9efb-9f1222ccdccd" path="/var/lib/kubelet/pods/78d8d813-ce1b-40b8-9efb-9f1222ccdccd/volumes" Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.223479 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb90e61e-cc15-4860-adc8-1d6adc3e065a" path="/var/lib/kubelet/pods/cb90e61e-cc15-4860-adc8-1d6adc3e065a/volumes" Feb 03 12:23:24 crc kubenswrapper[4679]: W0203 12:23:24.426427 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf6e3dac_ec8b_422b_9459_3554f884594d.slice/crio-d131726831653136a90ff9aadf0c6d0ca6a1ae13fbd661f198b9f06b8b47aef8 WatchSource:0}: Error finding container d131726831653136a90ff9aadf0c6d0ca6a1ae13fbd661f198b9f06b8b47aef8: Status 404 returned error can't find the container with id d131726831653136a90ff9aadf0c6d0ca6a1ae13fbd661f198b9f06b8b47aef8 Feb 03 12:23:24 crc kubenswrapper[4679]: W0203 12:23:24.431767 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d10dd12_5213_414c_bd2b_76396833ad19.slice/crio-0018ff6faca8af2af0d42b9133b29e1038527971511682190365dd3393ed8e84 WatchSource:0}: Error finding container 0018ff6faca8af2af0d42b9133b29e1038527971511682190365dd3393ed8e84: Status 404 returned error can't find the container with id 0018ff6faca8af2af0d42b9133b29e1038527971511682190365dd3393ed8e84 Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.476232 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bf6e3dac-ec8b-422b-9459-3554f884594d","Type":"ContainerStarted","Data":"d131726831653136a90ff9aadf0c6d0ca6a1ae13fbd661f198b9f06b8b47aef8"} Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.477436 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b788c2a3-0e8f-4a4a-b121-f4c021b4932c","Type":"ContainerStarted","Data":"2afc0154dc10dea2ffc7eb0ca91c6a7c028aaf7a6c80128f0889335e7a062ed8"} Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.478494 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tt4c" event={"ID":"c908c598-a229-467c-8430-de77205f95ec","Type":"ContainerStarted","Data":"522a856a5e4920969a8b2adbf0f374b9517c988963bf687a932b3912d21cc23a"} Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.480986 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9zrqh" event={"ID":"4d10dd12-5213-414c-bd2b-76396833ad19","Type":"ContainerStarted","Data":"0018ff6faca8af2af0d42b9133b29e1038527971511682190365dd3393ed8e84"} Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.482668 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea","Type":"ContainerStarted","Data":"cd993c6d71836d29774b9bec336913354426b3a9992de430024fb1438ef394e0"} Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.484061 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7","Type":"ContainerStarted","Data":"88982a20ae24704e0d21e0f40bf7ba9bf8c45b83f7035f57d4d28d749e0eeb73"} Feb 03 12:23:24 crc kubenswrapper[4679]: I0203 12:23:24.638045 4679 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 12:23:24 crc kubenswrapper[4679]: W0203 12:23:24.840837 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea35f3b6_94df_45c5_9b94_af55636b7ad0.slice/crio-e464c5798619eeb425e2f5545ddfc600a372e1801aa0388a545a4d03b7535346 WatchSource:0}: Error finding container e464c5798619eeb425e2f5545ddfc600a372e1801aa0388a545a4d03b7535346: Status 404 returned error can't find the container with id e464c5798619eeb425e2f5545ddfc600a372e1801aa0388a545a4d03b7535346 Feb 03 12:23:25 crc kubenswrapper[4679]: I0203 12:23:25.510847 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438272b0-d957-44f7-aa5e-502ce5189f9c","Type":"ContainerStarted","Data":"8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3"} Feb 03 12:23:25 crc kubenswrapper[4679]: I0203 12:23:25.520633 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73f156fc-e458-470c-ad7b-24125be5762c","Type":"ContainerStarted","Data":"c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934"} Feb 03 12:23:25 crc kubenswrapper[4679]: I0203 12:23:25.524586 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea35f3b6-94df-45c5-9b94-af55636b7ad0","Type":"ContainerStarted","Data":"e464c5798619eeb425e2f5545ddfc600a372e1801aa0388a545a4d03b7535346"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.575303 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b788c2a3-0e8f-4a4a-b121-f4c021b4932c","Type":"ContainerStarted","Data":"8d6c8b354b60eb9fae1d4fd6c9129834967caf39f616527a9c3abdb3e455a100"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.577857 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9zrqh" event={"ID":"4d10dd12-5213-414c-bd2b-76396833ad19","Type":"ContainerStarted","Data":"3bd6e4548e3ab769321e4e4a270edbe1f063c7e233715dabe4b6810ea78596a8"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.579633 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea","Type":"ContainerStarted","Data":"a72a95cc6b804a18b67d4e477c4ef99de22a167b736c4ae220be6d00c19f6959"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.582230 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bf6e3dac-ec8b-422b-9459-3554f884594d","Type":"ContainerStarted","Data":"663421931f3e58ecd031a093a2bdd299cb6702c36d7fa49ed0d1084b5f4902d8"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.584035 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tt4c" event={"ID":"c908c598-a229-467c-8430-de77205f95ec","Type":"ContainerStarted","Data":"f2da27729204046862f66db74d4f5e98467339d2d75f0fb59eff2cc22a357a81"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.584206 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5tt4c" Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.585658 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea35f3b6-94df-45c5-9b94-af55636b7ad0","Type":"ContainerStarted","Data":"68892da865d4a4f798bda0dbab019f8008935ba7ad7bc1de3a9b4da74a5705d5"} Feb 03 12:23:31 crc 
kubenswrapper[4679]: I0203 12:23:31.587664 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc55122-b95a-43ed-bec8-9262c84e0fa5","Type":"ContainerStarted","Data":"efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.587792 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.589716 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9e899cda-42d0-40ae-a9c6-34f4bbad9fe7","Type":"ContainerStarted","Data":"81b9ee8eecf11a1d505cf41c922bef308e1dd9b74d2dd30aca5a5e05160ccc0b"} Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.589867 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.656080 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.783473295 podStartE2EDuration="25.656043319s" podCreationTimestamp="2026-02-03 12:23:06 +0000 UTC" firstStartedPulling="2026-02-03 12:23:23.525603223 +0000 UTC m=+1076.000499311" lastFinishedPulling="2026-02-03 12:23:30.398173247 +0000 UTC m=+1082.873069335" observedRunningTime="2026-02-03 12:23:31.649587101 +0000 UTC m=+1084.124483219" watchObservedRunningTime="2026-02-03 12:23:31.656043319 +0000 UTC m=+1084.130939407" Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.726039 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5tt4c" podStartSLOduration=12.551207871 podStartE2EDuration="19.726012106s" podCreationTimestamp="2026-02-03 12:23:12 +0000 UTC" firstStartedPulling="2026-02-03 12:23:23.67485821 +0000 UTC m=+1076.149754298" lastFinishedPulling="2026-02-03 12:23:30.849662445 +0000 UTC m=+1083.324558533" observedRunningTime="2026-02-03 12:23:31.72118738 +0000 UTC m=+1084.196083468" watchObservedRunningTime="2026-02-03 12:23:31.726012106 +0000 UTC m=+1084.200908194" Feb 03 12:23:31 crc kubenswrapper[4679]: I0203 12:23:31.748016 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.960453166 podStartE2EDuration="23.74799611s" podCreationTimestamp="2026-02-03 12:23:08 +0000 UTC" firstStartedPulling="2026-02-03 12:23:23.061767572 +0000 UTC m=+1075.536663660" lastFinishedPulling="2026-02-03 12:23:30.849310506 +0000 UTC m=+1083.324206604" observedRunningTime="2026-02-03 12:23:31.736556651 +0000 UTC m=+1084.211452749" watchObservedRunningTime="2026-02-03 12:23:31.74799611 +0000 UTC m=+1084.222892198" Feb 03 12:23:32 crc kubenswrapper[4679]: I0203 12:23:32.604501 4679 generic.go:334] "Generic (PLEG): container finished" podID="4d10dd12-5213-414c-bd2b-76396833ad19" containerID="3bd6e4548e3ab769321e4e4a270edbe1f063c7e233715dabe4b6810ea78596a8" exitCode=0 Feb 03 12:23:32 crc kubenswrapper[4679]: I0203 12:23:32.606557 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9zrqh" event={"ID":"4d10dd12-5213-414c-bd2b-76396833ad19","Type":"ContainerDied","Data":"3bd6e4548e3ab769321e4e4a270edbe1f063c7e233715dabe4b6810ea78596a8"} Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.614420 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"bf6e3dac-ec8b-422b-9459-3554f884594d","Type":"ContainerStarted","Data":"444162d19388643a0539051f4d7d2a9d560b713e86ffaa6a26357b4b52766af2"} Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.617624 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ea35f3b6-94df-45c5-9b94-af55636b7ad0","Type":"ContainerStarted","Data":"e352265df1f7c796515c00ed2ff9ba751d8c9fe3efdd1c9d840d72e81c81ec05"} Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.620612 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9zrqh" event={"ID":"4d10dd12-5213-414c-bd2b-76396833ad19","Type":"ContainerStarted","Data":"df6c316a59091187a7c2d1fb46588666e759e27a868bb3364896c2cdaad5fb04"} Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.620657 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9zrqh" event={"ID":"4d10dd12-5213-414c-bd2b-76396833ad19","Type":"ContainerStarted","Data":"c8eceb93690081f06899d65644d3448a2353341fb7b0b1fc14fcda8d995783db"} Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.621425 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.621464 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9zrqh" Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.638793 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.069333518 podStartE2EDuration="22.638768626s" podCreationTimestamp="2026-02-03 12:23:11 +0000 UTC" firstStartedPulling="2026-02-03 12:23:24.431116315 +0000 UTC m=+1076.906012403" lastFinishedPulling="2026-02-03 12:23:33.000551413 +0000 UTC m=+1085.475447511" observedRunningTime="2026-02-03 12:23:33.634121984 +0000 UTC m=+1086.109018092" watchObservedRunningTime="2026-02-03 12:23:33.638768626 +0000 UTC m=+1086.113664704" Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.668557 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9zrqh" podStartSLOduration=15.302156795 podStartE2EDuration="21.668530933s" podCreationTimestamp="2026-02-03 12:23:12 +0000 UTC" firstStartedPulling="2026-02-03 12:23:24.437243065 +0000 UTC m=+1076.912139153" lastFinishedPulling="2026-02-03 12:23:30.803617203 +0000 UTC m=+1083.278513291" observedRunningTime="2026-02-03 12:23:33.661785716 +0000 UTC m=+1086.136681804" watchObservedRunningTime="2026-02-03 12:23:33.668530933 +0000 UTC m=+1086.143427021" Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.684442 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.526607586 podStartE2EDuration="19.684416147s" podCreationTimestamp="2026-02-03 12:23:14 +0000 UTC" firstStartedPulling="2026-02-03 12:23:24.844321853 +0000 UTC m=+1077.319217941" lastFinishedPulling="2026-02-03 12:23:33.002130414 +0000 UTC m=+1085.477026502" observedRunningTime="2026-02-03 12:23:33.680200957 +0000 UTC m=+1086.155097055" watchObservedRunningTime="2026-02-03 12:23:33.684416147 +0000 UTC m=+1086.159312235" Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.735998 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:33 crc kubenswrapper[4679]: I0203 12:23:33.970731 4679 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.636631 4679 generic.go:334] "Generic (PLEG): container finished" podID="b788c2a3-0e8f-4a4a-b121-f4c021b4932c" containerID="8d6c8b354b60eb9fae1d4fd6c9129834967caf39f616527a9c3abdb3e455a100" exitCode=0 Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.636708 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b788c2a3-0e8f-4a4a-b121-f4c021b4932c","Type":"ContainerDied","Data":"8d6c8b354b60eb9fae1d4fd6c9129834967caf39f616527a9c3abdb3e455a100"} Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.640872 4679 generic.go:334] "Generic (PLEG): container finished" podID="e8a14eb9-fdf3-44dc-b8a8-0494fd209dea" containerID="a72a95cc6b804a18b67d4e477c4ef99de22a167b736c4ae220be6d00c19f6959" exitCode=0 Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.641405 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea","Type":"ContainerDied","Data":"a72a95cc6b804a18b67d4e477c4ef99de22a167b736c4ae220be6d00c19f6959"} Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.644199 4679 generic.go:334] "Generic (PLEG): container finished" podID="586e393a-7260-41c1-8307-5ec717cd5275" containerID="673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2" exitCode=0 Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.644260 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" event={"ID":"586e393a-7260-41c1-8307-5ec717cd5275","Type":"ContainerDied","Data":"673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2"} Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.645982 4679 generic.go:334] "Generic (PLEG): container finished" podID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerID="c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a" exitCode=0 Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.646094 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" event={"ID":"ef753760-d109-4876-b6ee-acf5cd520fd5","Type":"ContainerDied","Data":"c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a"} Feb 03 12:23:35 crc kubenswrapper[4679]: I0203 12:23:35.736604 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.655487 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b788c2a3-0e8f-4a4a-b121-f4c021b4932c","Type":"ContainerStarted","Data":"d211c53568884834128820453d76b079ba2ab4e7418f182848727f116e8b15c5"} Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.660262 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e8a14eb9-fdf3-44dc-b8a8-0494fd209dea","Type":"ContainerStarted","Data":"7f37a7308ac98603d1b4dd69879ac82cc8ad58183d91b3246e000bcee881bebb"} Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.662343 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" event={"ID":"586e393a-7260-41c1-8307-5ec717cd5275","Type":"ContainerStarted","Data":"caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267"} Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.662578 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.665247 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" event={"ID":"ef753760-d109-4876-b6ee-acf5cd520fd5","Type":"ContainerStarted","Data":"56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9"} Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.665834 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.677474 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.93632286 podStartE2EDuration="31.677454483s" podCreationTimestamp="2026-02-03 12:23:05 +0000 UTC" firstStartedPulling="2026-02-03 12:23:23.657123367 +0000 UTC m=+1076.132019455" lastFinishedPulling="2026-02-03 12:23:30.39825499 +0000 UTC m=+1082.873151078" observedRunningTime="2026-02-03 12:23:36.675475381 +0000 UTC m=+1089.150371489" watchObservedRunningTime="2026-02-03 12:23:36.677454483 +0000 UTC m=+1089.152350571" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.705698 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" podStartSLOduration=-9223372002.149097 podStartE2EDuration="34.705678339s" podCreationTimestamp="2026-02-03 12:23:02 +0000 UTC" firstStartedPulling="2026-02-03 12:23:03.816184384 +0000 UTC m=+1056.291080472" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:36.705390631 +0000 UTC m=+1089.180286719" watchObservedRunningTime="2026-02-03 12:23:36.705678339 +0000 UTC m=+1089.180574427" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.727154 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.439923609 podStartE2EDuration="33.727134999s" podCreationTimestamp="2026-02-03 12:23:03 +0000 UTC" firstStartedPulling="2026-02-03 12:23:23.562411814 +0000 UTC m=+1076.037307902" lastFinishedPulling="2026-02-03 12:23:30.849623204 +0000 UTC m=+1083.324519292" observedRunningTime="2026-02-03 12:23:36.725052175 +0000 UTC m=+1089.199948263" watchObservedRunningTime="2026-02-03 12:23:36.727134999 +0000 UTC m=+1089.202031087" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.753131 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" podStartSLOduration=3.185682229 podStartE2EDuration="34.753108587s" podCreationTimestamp="2026-02-03 12:23:02 +0000 UTC" firstStartedPulling="2026-02-03 12:23:03.40277889 +0000 UTC m=+1055.877674978" lastFinishedPulling="2026-02-03 12:23:34.970205248 +0000 UTC m=+1087.445101336" observedRunningTime="2026-02-03 12:23:36.745876318 +0000 UTC m=+1089.220772406" watchObservedRunningTime="2026-02-03 12:23:36.753108587 +0000 UTC m=+1089.228004675" Feb 03 12:23:36 crc kubenswrapper[4679]: I0203 12:23:36.778703 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.005916 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.006444 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:37 crc kubenswrapper[4679]: 
I0203 12:23:37.052831 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.147284 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.351644 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6glvf"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.386938 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l2xq"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.388626 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.400026 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.407423 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l2xq"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.421664 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kmmd2"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.422992 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.426137 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.449559 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kmmd2"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551543 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54fecd77-d186-4510-9e06-4ff67edee154-config\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551598 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-config\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551669 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fecd77-d186-4510-9e06-4ff67edee154-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551722 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chspk\" (UniqueName: \"kubernetes.io/projected/54fecd77-d186-4510-9e06-4ff67edee154-kube-api-access-chspk\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551766 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/54fecd77-d186-4510-9e06-4ff67edee154-ovn-rundir\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551839 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fecd77-d186-4510-9e06-4ff67edee154-combined-ca-bundle\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551867 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/54fecd77-d186-4510-9e06-4ff67edee154-ovs-rundir\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.551915 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fdf\" (UniqueName: \"kubernetes.io/projected/d9eb123d-3f0b-4621-a5f4-86215683cdee-kube-api-access-c7fdf\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.552165 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.552218 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.653965 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54fecd77-d186-4510-9e06-4ff67edee154-config\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654019 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-config\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654038 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fecd77-d186-4510-9e06-4ff67edee154-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 
12:23:37.654059 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chspk\" (UniqueName: \"kubernetes.io/projected/54fecd77-d186-4510-9e06-4ff67edee154-kube-api-access-chspk\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654085 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/54fecd77-d186-4510-9e06-4ff67edee154-ovn-rundir\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654123 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fecd77-d186-4510-9e06-4ff67edee154-combined-ca-bundle\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654140 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/54fecd77-d186-4510-9e06-4ff67edee154-ovs-rundir\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654171 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fdf\" (UniqueName: \"kubernetes.io/projected/d9eb123d-3f0b-4621-a5f4-86215683cdee-kube-api-access-c7fdf\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654216 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654235 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654536 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/54fecd77-d186-4510-9e06-4ff67edee154-ovn-rundir\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.654864 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54fecd77-d186-4510-9e06-4ff67edee154-config\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.655295 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/54fecd77-d186-4510-9e06-4ff67edee154-ovs-rundir\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.655385 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-config\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.655400 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.656225 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.662931 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fecd77-d186-4510-9e06-4ff67edee154-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.663098 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fecd77-d186-4510-9e06-4ff67edee154-combined-ca-bundle\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.675460 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fdf\" (UniqueName: \"kubernetes.io/projected/d9eb123d-3f0b-4621-a5f4-86215683cdee-kube-api-access-c7fdf\") pod \"dnsmasq-dns-7fd796d7df-8l2xq\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.688613 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chspk\" (UniqueName: \"kubernetes.io/projected/54fecd77-d186-4510-9e06-4ff67edee154-kube-api-access-chspk\") pod \"ovn-controller-metrics-kmmd2\" (UID: \"54fecd77-d186-4510-9e06-4ff67edee154\") " pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.708959 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.722666 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ntpjg"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.737175 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kmmd2" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.757111 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.770251 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4k4sm"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.772061 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.776415 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.787848 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4k4sm"] Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.857295 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.857497 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52b4\" (UniqueName: \"kubernetes.io/projected/ced8c228-de1b-4af4-b503-94b1c05499a8-kube-api-access-l52b4\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.857538 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.857572 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-config\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.857607 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.959595 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l52b4\" (UniqueName: \"kubernetes.io/projected/ced8c228-de1b-4af4-b503-94b1c05499a8-kube-api-access-l52b4\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.959962 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.960005 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-config\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.960052 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.960078 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.961560 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.962209 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.962720 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.962767 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-config\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:37 crc kubenswrapper[4679]: I0203 12:23:37.998622 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l52b4\" (UniqueName: \"kubernetes.io/projected/ced8c228-de1b-4af4-b503-94b1c05499a8-kube-api-access-l52b4\") pod \"dnsmasq-dns-86db49b7ff-4k4sm\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.043464 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.055664 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 12:23:38 
crc kubenswrapper[4679]: I0203 12:23:38.055774 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.061683 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.061842 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.061910 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.061952 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rnh79" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163205 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4275cf53-917f-4b88-9832-b3f9da33b445-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163259 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163292 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163350 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skwn\" (UniqueName: \"kubernetes.io/projected/4275cf53-917f-4b88-9832-b3f9da33b445-kube-api-access-8skwn\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163404 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275cf53-917f-4b88-9832-b3f9da33b445-config\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163427 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.163464 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4275cf53-917f-4b88-9832-b3f9da33b445-scripts\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.178837 4679 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264713 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8skwn\" (UniqueName: \"kubernetes.io/projected/4275cf53-917f-4b88-9832-b3f9da33b445-kube-api-access-8skwn\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264765 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275cf53-917f-4b88-9832-b3f9da33b445-config\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264802 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264848 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4275cf53-917f-4b88-9832-b3f9da33b445-scripts\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264895 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4275cf53-917f-4b88-9832-b3f9da33b445-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264930 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.264966 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.270390 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275cf53-917f-4b88-9832-b3f9da33b445-config\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.270818 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4275cf53-917f-4b88-9832-b3f9da33b445-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.270966 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.273583 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4275cf53-917f-4b88-9832-b3f9da33b445-scripts\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.280390 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.283289 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4275cf53-917f-4b88-9832-b3f9da33b445-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.301782 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8skwn\" (UniqueName: \"kubernetes.io/projected/4275cf53-917f-4b88-9832-b3f9da33b445-kube-api-access-8skwn\") pod \"ovn-northd-0\" (UID: \"4275cf53-917f-4b88-9832-b3f9da33b445\") " pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.344492 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l2xq"] Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.378831 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.387394 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kmmd2"] Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.686718 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4k4sm"] Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.695890 4679 generic.go:334] "Generic (PLEG): container finished" podID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerID="8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650" exitCode=0 Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.695965 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" event={"ID":"d9eb123d-3f0b-4621-a5f4-86215683cdee","Type":"ContainerDied","Data":"8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650"} Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.695992 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" event={"ID":"d9eb123d-3f0b-4621-a5f4-86215683cdee","Type":"ContainerStarted","Data":"5daad532ae426c6690ceb6636ba5a183170176ce6dec43f936302b12de6fc94f"} Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.699819 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" podUID="586e393a-7260-41c1-8307-5ec717cd5275" containerName="dnsmasq-dns" containerID="cri-o://caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267" gracePeriod=10 Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.700150 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kmmd2" event={"ID":"54fecd77-d186-4510-9e06-4ff67edee154","Type":"ContainerStarted","Data":"ccaef7cee5039a012d1f896d74d9b074bd9f456210b3cec847fccaba4a577f21"} Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.701263 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerName="dnsmasq-dns" containerID="cri-o://56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9" gracePeriod=10 Feb 03 12:23:38 crc kubenswrapper[4679]: I0203 12:23:38.796036 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.137005 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.208037 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l2xq"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.225466 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-djvr2"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.228076 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.267352 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-djvr2"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.278418 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.297316 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.297385 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-dns-svc\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.297445 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmz4\" (UniqueName: \"kubernetes.io/projected/f98e14ea-27e5-471b-900d-39c0cc2d676f-kube-api-access-tnmz4\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.297469 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-config\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.297510 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.313761 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.400782 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-dns-svc\") pod \"ef753760-d109-4876-b6ee-acf5cd520fd5\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.400964 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr7cs\" (UniqueName: \"kubernetes.io/projected/586e393a-7260-41c1-8307-5ec717cd5275-kube-api-access-lr7cs\") pod \"586e393a-7260-41c1-8307-5ec717cd5275\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401076 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shb64\" (UniqueName: \"kubernetes.io/projected/ef753760-d109-4876-b6ee-acf5cd520fd5-kube-api-access-shb64\") pod \"ef753760-d109-4876-b6ee-acf5cd520fd5\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401114 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-config\") pod \"586e393a-7260-41c1-8307-5ec717cd5275\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401141 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-dns-svc\") pod \"586e393a-7260-41c1-8307-5ec717cd5275\" (UID: \"586e393a-7260-41c1-8307-5ec717cd5275\") " Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401171 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-config\") pod \"ef753760-d109-4876-b6ee-acf5cd520fd5\" (UID: \"ef753760-d109-4876-b6ee-acf5cd520fd5\") " Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401383 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnmz4\" (UniqueName: \"kubernetes.io/projected/f98e14ea-27e5-471b-900d-39c0cc2d676f-kube-api-access-tnmz4\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401404 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-config\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401457 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401575 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.401597 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-dns-svc\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.402403 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-dns-svc\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.406655 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef753760-d109-4876-b6ee-acf5cd520fd5-kube-api-access-shb64" (OuterVolumeSpecName: "kube-api-access-shb64") pod "ef753760-d109-4876-b6ee-acf5cd520fd5" (UID: "ef753760-d109-4876-b6ee-acf5cd520fd5"). InnerVolumeSpecName "kube-api-access-shb64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.407133 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-config\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.407169 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.407929 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.410819 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586e393a-7260-41c1-8307-5ec717cd5275-kube-api-access-lr7cs" (OuterVolumeSpecName: "kube-api-access-lr7cs") pod "586e393a-7260-41c1-8307-5ec717cd5275" (UID: "586e393a-7260-41c1-8307-5ec717cd5275"). InnerVolumeSpecName "kube-api-access-lr7cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.432350 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnmz4\" (UniqueName: \"kubernetes.io/projected/f98e14ea-27e5-471b-900d-39c0cc2d676f-kube-api-access-tnmz4\") pod \"dnsmasq-dns-698758b865-djvr2\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.480065 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-config" (OuterVolumeSpecName: "config") pod "586e393a-7260-41c1-8307-5ec717cd5275" (UID: "586e393a-7260-41c1-8307-5ec717cd5275"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.482707 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef753760-d109-4876-b6ee-acf5cd520fd5" (UID: "ef753760-d109-4876-b6ee-acf5cd520fd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.497463 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "586e393a-7260-41c1-8307-5ec717cd5275" (UID: "586e393a-7260-41c1-8307-5ec717cd5275"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.498797 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-config" (OuterVolumeSpecName: "config") pod "ef753760-d109-4876-b6ee-acf5cd520fd5" (UID: "ef753760-d109-4876-b6ee-acf5cd520fd5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.504349 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr7cs\" (UniqueName: \"kubernetes.io/projected/586e393a-7260-41c1-8307-5ec717cd5275-kube-api-access-lr7cs\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.504392 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shb64\" (UniqueName: \"kubernetes.io/projected/ef753760-d109-4876-b6ee-acf5cd520fd5-kube-api-access-shb64\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.504405 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.504415 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/586e393a-7260-41c1-8307-5ec717cd5275-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.504426 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.504435 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef753760-d109-4876-b6ee-acf5cd520fd5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.569065 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.723218 4679 generic.go:334] "Generic (PLEG): container finished" podID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerID="56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9" exitCode=0 Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.723544 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.723573 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" event={"ID":"ef753760-d109-4876-b6ee-acf5cd520fd5","Type":"ContainerDied","Data":"56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.725271 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ntpjg" event={"ID":"ef753760-d109-4876-b6ee-acf5cd520fd5","Type":"ContainerDied","Data":"7794d0c3e96b939a1b25e66ecd1d8333e3317cf9976c04b8cc38801cc9e1fcaa"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.725306 4679 scope.go:117] "RemoveContainer" containerID="56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.738766 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kmmd2" event={"ID":"54fecd77-d186-4510-9e06-4ff67edee154","Type":"ContainerStarted","Data":"40899b2d1046c4844d5ee841d592aab2a02dc1e6f5e06610c6ff98d1ddfb052c"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.748244 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4275cf53-917f-4b88-9832-b3f9da33b445","Type":"ContainerStarted","Data":"504d16b3acf2f2316ca55380d6a2648b49fe415596b48eeec97f0e878ad84a7c"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.751569 4679 generic.go:334] "Generic (PLEG): container finished" podID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerID="ea879df1cd1ef440513a0daf1332e4e1e01ee83b703ffa3685b0f27372c21803" exitCode=0 Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.751642 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" event={"ID":"ced8c228-de1b-4af4-b503-94b1c05499a8","Type":"ContainerDied","Data":"ea879df1cd1ef440513a0daf1332e4e1e01ee83b703ffa3685b0f27372c21803"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.751668 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" event={"ID":"ced8c228-de1b-4af4-b503-94b1c05499a8","Type":"ContainerStarted","Data":"2d7792cb845607d5652795b4d9ac3abe793095cffa5a0d3d2227d17b1984d777"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.771325 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kmmd2" podStartSLOduration=2.771304089 podStartE2EDuration="2.771304089s" podCreationTimestamp="2026-02-03 12:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:39.764983154 +0000 UTC m=+1092.239879252" watchObservedRunningTime="2026-02-03 12:23:39.771304089 +0000 UTC m=+1092.246200177" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.771676 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerName="dnsmasq-dns" containerID="cri-o://0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842" gracePeriod=10 Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.771868 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" 
event={"ID":"d9eb123d-3f0b-4621-a5f4-86215683cdee","Type":"ContainerStarted","Data":"0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.771914 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.788957 4679 scope.go:117] "RemoveContainer" containerID="c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.804549 4679 generic.go:334] "Generic (PLEG): container finished" podID="586e393a-7260-41c1-8307-5ec717cd5275" containerID="caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267" exitCode=0 Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.804640 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" event={"ID":"586e393a-7260-41c1-8307-5ec717cd5275","Type":"ContainerDied","Data":"caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.804671 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" event={"ID":"586e393a-7260-41c1-8307-5ec717cd5275","Type":"ContainerDied","Data":"1ea656711df5f8587427d055cc6788b68c3a557c895dafc9a4004863be3ad909"} Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.804743 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6glvf" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.835836 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ntpjg"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.842637 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ntpjg"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.961663 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" podStartSLOduration=2.961639129 podStartE2EDuration="2.961639129s" podCreationTimestamp="2026-02-03 12:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:39.892907254 +0000 UTC m=+1092.367803342" watchObservedRunningTime="2026-02-03 12:23:39.961639129 +0000 UTC m=+1092.436535217" Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.963882 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6glvf"] Feb 03 12:23:39 crc kubenswrapper[4679]: I0203 12:23:39.979775 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6glvf"] Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.163785 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-djvr2"] Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.224544 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586e393a-7260-41c1-8307-5ec717cd5275" path="/var/lib/kubelet/pods/586e393a-7260-41c1-8307-5ec717cd5275/volumes" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.226538 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" path="/var/lib/kubelet/pods/ef753760-d109-4876-b6ee-acf5cd520fd5/volumes" Feb 03 12:23:40 crc kubenswrapper[4679]: W0203 12:23:40.236548 4679 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf98e14ea_27e5_471b_900d_39c0cc2d676f.slice/crio-b5d33bdfbab4ccba042a2601b5472c7ee82ac6b31f2b17d4694ee143456a52d3 WatchSource:0}: Error finding container b5d33bdfbab4ccba042a2601b5472c7ee82ac6b31f2b17d4694ee143456a52d3: Status 404 returned error can't find the container with id b5d33bdfbab4ccba042a2601b5472c7ee82ac6b31f2b17d4694ee143456a52d3 Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.329586 4679 scope.go:117] "RemoveContainer" containerID="56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.330609 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9\": container with ID starting with 56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9 not found: ID does not exist" containerID="56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.330643 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9"} err="failed to get container status \"56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9\": rpc error: code = NotFound desc = could not find container \"56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9\": container with ID starting with 56809f97fa9823da3f5c2d71a5a585a7e244305caa3e5e98055a6dc1a1606fb9 not found: ID does not exist" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.330666 4679 scope.go:117] "RemoveContainer" containerID="c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.331001 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a\": container with ID starting with c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a not found: ID does not exist" containerID="c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.331049 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a"} err="failed to get container status \"c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a\": rpc error: code = NotFound desc = could not find container \"c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a\": container with ID starting with c0e576a679e1916ff6dd71a8ed8c6341f71be651d4bd546114ece52c1d83380a not found: ID does not exist" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.331081 4679 scope.go:117] "RemoveContainer" containerID="caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.361991 4679 scope.go:117] "RemoveContainer" containerID="673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.405075 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.405688 4679 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerName="init" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.405706 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerName="init" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.405739 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586e393a-7260-41c1-8307-5ec717cd5275" containerName="init" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.405746 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="586e393a-7260-41c1-8307-5ec717cd5275" containerName="init" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.405769 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerName="dnsmasq-dns" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.405775 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerName="dnsmasq-dns" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.405800 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586e393a-7260-41c1-8307-5ec717cd5275" containerName="dnsmasq-dns" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.405806 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="586e393a-7260-41c1-8307-5ec717cd5275" containerName="dnsmasq-dns" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.406006 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="586e393a-7260-41c1-8307-5ec717cd5275" containerName="dnsmasq-dns" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.406019 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef753760-d109-4876-b6ee-acf5cd520fd5" containerName="dnsmasq-dns" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.411492 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.411684 4679 scope.go:117] "RemoveContainer" containerID="caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.415880 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.416085 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-695dh" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.416217 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.416384 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.431246 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267\": container with ID starting with caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267 not found: ID does not exist" containerID="caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.431296 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267"} err="failed to get container status \"caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267\": rpc error: code = NotFound desc = could not find container \"caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267\": container with ID starting with caec334971df27a98130c73f8ac5e110a4aa5c4787e31ebf6636674fee3c5267 not found: ID does not exist" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.431326 4679 scope.go:117] "RemoveContainer" containerID="673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.437764 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.440085 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2\": container with ID starting with 673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2 not found: ID does not exist" containerID="673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.440125 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2"} err="failed to get container status \"673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2\": rpc error: code = NotFound desc = could not find container \"673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2\": container with ID starting with 673e7b7fd2781a1391d451ecb201f63a3cbff774622b3f1636421ad8702049a2 not found: ID does not exist" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.541105 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9mx\" (UniqueName: 
\"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-kube-api-access-dk9mx\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.541420 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/17121344-4061-43d2-bf89-7a3684b88461-cache\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.541466 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/17121344-4061-43d2-bf89-7a3684b88461-lock\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.541585 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17121344-4061-43d2-bf89-7a3684b88461-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.541644 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.541694 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.554839 4679 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 03 12:23:40 crc kubenswrapper[4679]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/ced8c228-de1b-4af4-b503-94b1c05499a8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 03 12:23:40 crc kubenswrapper[4679]: > podSandboxID="2d7792cb845607d5652795b4d9ac3abe793095cffa5a0d3d2227d17b1984d777" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.555066 4679 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 03 12:23:40 crc kubenswrapper[4679]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l52b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-4k4sm_openstack(ced8c228-de1b-4af4-b503-94b1c05499a8): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/ced8c228-de1b-4af4-b503-94b1c05499a8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 03 12:23:40 crc kubenswrapper[4679]: > logger="UnhandledError" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.556516 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/ced8c228-de1b-4af4-b503-94b1c05499a8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.614656 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643079 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk9mx\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-kube-api-access-dk9mx\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643133 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/17121344-4061-43d2-bf89-7a3684b88461-cache\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643167 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/17121344-4061-43d2-bf89-7a3684b88461-lock\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643241 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17121344-4061-43d2-bf89-7a3684b88461-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643282 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643326 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.643496 4679 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.643510 4679 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:23:40 crc kubenswrapper[4679]: E0203 12:23:40.643558 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift podName:17121344-4061-43d2-bf89-7a3684b88461 nodeName:}" failed. No retries permitted until 2026-02-03 12:23:41.143541892 +0000 UTC m=+1093.618437970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift") pod "swift-storage-0" (UID: "17121344-4061-43d2-bf89-7a3684b88461") : configmap "swift-ring-files" not found Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.643768 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/17121344-4061-43d2-bf89-7a3684b88461-cache\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.644013 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/17121344-4061-43d2-bf89-7a3684b88461-lock\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.650995 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17121344-4061-43d2-bf89-7a3684b88461-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.652553 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.666194 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk9mx\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-kube-api-access-dk9mx\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.732626 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.747001 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-ovsdbserver-nb\") pod \"d9eb123d-3f0b-4621-a5f4-86215683cdee\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.747049 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-config\") pod \"d9eb123d-3f0b-4621-a5f4-86215683cdee\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.747093 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-dns-svc\") pod \"d9eb123d-3f0b-4621-a5f4-86215683cdee\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.747123 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-c7fdf\" (UniqueName: \"kubernetes.io/projected/d9eb123d-3f0b-4621-a5f4-86215683cdee-kube-api-access-c7fdf\") pod \"d9eb123d-3f0b-4621-a5f4-86215683cdee\" (UID: \"d9eb123d-3f0b-4621-a5f4-86215683cdee\") " Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.834035 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9eb123d-3f0b-4621-a5f4-86215683cdee-kube-api-access-c7fdf" (OuterVolumeSpecName: "kube-api-access-c7fdf") pod "d9eb123d-3f0b-4621-a5f4-86215683cdee" (UID: "d9eb123d-3f0b-4621-a5f4-86215683cdee"). InnerVolumeSpecName "kube-api-access-c7fdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.843529 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-djvr2" event={"ID":"f98e14ea-27e5-471b-900d-39c0cc2d676f","Type":"ContainerStarted","Data":"b5d33bdfbab4ccba042a2601b5472c7ee82ac6b31f2b17d4694ee143456a52d3"} Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.848714 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7fdf\" (UniqueName: \"kubernetes.io/projected/d9eb123d-3f0b-4621-a5f4-86215683cdee-kube-api-access-c7fdf\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.879642 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4275cf53-917f-4b88-9832-b3f9da33b445","Type":"ContainerStarted","Data":"ddd6b70417994da7a3f2c6f79747d10346f644647ccc702665d495dc4e44f4e8"} Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.928936 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9eb123d-3f0b-4621-a5f4-86215683cdee" (UID: "d9eb123d-3f0b-4621-a5f4-86215683cdee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.931625 4679 generic.go:334] "Generic (PLEG): container finished" podID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerID="0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842" exitCode=0 Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.931687 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" event={"ID":"d9eb123d-3f0b-4621-a5f4-86215683cdee","Type":"ContainerDied","Data":"0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842"} Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.931714 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" event={"ID":"d9eb123d-3f0b-4621-a5f4-86215683cdee","Type":"ContainerDied","Data":"5daad532ae426c6690ceb6636ba5a183170176ce6dec43f936302b12de6fc94f"} Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.931732 4679 scope.go:117] "RemoveContainer" containerID="0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.931827 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8l2xq" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.950055 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.956963 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-config" (OuterVolumeSpecName: "config") pod "d9eb123d-3f0b-4621-a5f4-86215683cdee" (UID: "d9eb123d-3f0b-4621-a5f4-86215683cdee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.964685 4679 scope.go:117] "RemoveContainer" containerID="8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650" Feb 03 12:23:40 crc kubenswrapper[4679]: I0203 12:23:40.971743 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9eb123d-3f0b-4621-a5f4-86215683cdee" (UID: "d9eb123d-3f0b-4621-a5f4-86215683cdee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.012686 4679 scope.go:117] "RemoveContainer" containerID="0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842" Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.014861 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842\": container with ID starting with 0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842 not found: ID does not exist" containerID="0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.014909 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842"} err="failed to get container status \"0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842\": rpc error: code = NotFound desc = could not find container \"0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842\": container with ID starting with 0cce28c07197f4e975be24d77fa170fdd2f5a55c60e43f5fa722e25651875842 not found: ID does not exist" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.014940 4679 scope.go:117] "RemoveContainer" containerID="8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650" Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.016940 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650\": container with ID starting with 8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650 not found: ID does not exist" containerID="8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.016966 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650"} err="failed to get container status 
\"8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650\": rpc error: code = NotFound desc = could not find container \"8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650\": container with ID starting with 8bea723fffbcb6507435579fef54784ace0b7c09439c28caf32a0a975cb6e650 not found: ID does not exist" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.027317 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7mtlb"] Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.027732 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerName="init" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.027748 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerName="init" Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.027777 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerName="dnsmasq-dns" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.027783 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerName="dnsmasq-dns" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.027934 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" containerName="dnsmasq-dns" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.028478 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.033346 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.033436 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.033753 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.043833 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7mtlb"] Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.052278 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.052317 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eb123d-3f0b-4621-a5f4-86215683cdee-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.155741 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pg8\" (UniqueName: \"kubernetes.io/projected/43821977-e5d9-4405-b6c6-d739a8fea389-kube-api-access-v7pg8\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.155846 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " 
pod="openstack/swift-storage-0" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.155891 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/43821977-e5d9-4405-b6c6-d739a8fea389-etc-swift\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.155920 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-ring-data-devices\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.155946 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-dispersionconf\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.155972 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-combined-ca-bundle\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.156060 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-scripts\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.156100 4679 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.156127 4679 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.156105 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-swiftconf\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: E0203 12:23:41.156181 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift podName:17121344-4061-43d2-bf89-7a3684b88461 nodeName:}" failed. No retries permitted until 2026-02-03 12:23:42.156163106 +0000 UTC m=+1094.631059184 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift") pod "swift-storage-0" (UID: "17121344-4061-43d2-bf89-7a3684b88461") : configmap "swift-ring-files" not found Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257555 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-scripts\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257617 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-swiftconf\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257672 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7pg8\" (UniqueName: \"kubernetes.io/projected/43821977-e5d9-4405-b6c6-d739a8fea389-kube-api-access-v7pg8\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257750 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/43821977-e5d9-4405-b6c6-d739a8fea389-etc-swift\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257773 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-ring-data-devices\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257796 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-dispersionconf\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.257816 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-combined-ca-bundle\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.258444 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/43821977-e5d9-4405-b6c6-d739a8fea389-etc-swift\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.258788 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-ring-data-devices\") 
pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.259029 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-scripts\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.262590 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-combined-ca-bundle\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.264746 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-dispersionconf\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.265220 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-swiftconf\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.272513 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l2xq"] Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.279386 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8l2xq"] Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.283318 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7pg8\" (UniqueName: \"kubernetes.io/projected/43821977-e5d9-4405-b6c6-d739a8fea389-kube-api-access-v7pg8\") pod \"swift-ring-rebalance-7mtlb\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") " pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.399592 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7mtlb" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.871064 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7mtlb"] Feb 03 12:23:41 crc kubenswrapper[4679]: W0203 12:23:41.877630 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43821977_e5d9_4405_b6c6_d739a8fea389.slice/crio-ddb7808014b6304df5f78dba17ae484f3382778fa558795a2ba8b2f1b80d9729 WatchSource:0}: Error finding container ddb7808014b6304df5f78dba17ae484f3382778fa558795a2ba8b2f1b80d9729: Status 404 returned error can't find the container with id ddb7808014b6304df5f78dba17ae484f3382778fa558795a2ba8b2f1b80d9729 Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.958093 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4275cf53-917f-4b88-9832-b3f9da33b445","Type":"ContainerStarted","Data":"2f0515fa897b6abb909b94586028ad918e730540c38af6b2ab49a3e0b3a55864"} Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.958211 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.960434 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" event={"ID":"ced8c228-de1b-4af4-b503-94b1c05499a8","Type":"ContainerStarted","Data":"72da974a1427da770435f77b77e79741fe89d8d757129fad84deac6754624ac1"} Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.960647 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.962853 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7mtlb" event={"ID":"43821977-e5d9-4405-b6c6-d739a8fea389","Type":"ContainerStarted","Data":"ddb7808014b6304df5f78dba17ae484f3382778fa558795a2ba8b2f1b80d9729"} Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.964766 4679 generic.go:334] "Generic (PLEG): container finished" podID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerID="247583b6e500f4fb9c97f116eb3aa570779c3a44ddff5806b769142113f29c33" exitCode=0 Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.964805 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-djvr2" event={"ID":"f98e14ea-27e5-471b-900d-39c0cc2d676f","Type":"ContainerDied","Data":"247583b6e500f4fb9c97f116eb3aa570779c3a44ddff5806b769142113f29c33"} Feb 03 12:23:41 crc kubenswrapper[4679]: I0203 12:23:41.986857 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.408848016 podStartE2EDuration="4.986839634s" podCreationTimestamp="2026-02-03 12:23:37 +0000 UTC" firstStartedPulling="2026-02-03 12:23:38.785943153 +0000 UTC m=+1091.260839241" lastFinishedPulling="2026-02-03 12:23:40.363934771 +0000 UTC m=+1092.838830859" observedRunningTime="2026-02-03 12:23:41.980616422 +0000 UTC m=+1094.455512520" watchObservedRunningTime="2026-02-03 12:23:41.986839634 +0000 UTC m=+1094.461735722" Feb 03 12:23:42 crc kubenswrapper[4679]: I0203 12:23:42.046787 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" podStartSLOduration=5.046764659 podStartE2EDuration="5.046764659s" podCreationTimestamp="2026-02-03 12:23:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:42.044877669 +0000 UTC m=+1094.519773757" watchObservedRunningTime="2026-02-03 12:23:42.046764659 +0000 UTC m=+1094.521660747" Feb 03 12:23:42 crc kubenswrapper[4679]: I0203 12:23:42.180276 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:42 crc kubenswrapper[4679]: E0203 12:23:42.180491 4679 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:23:42 crc kubenswrapper[4679]: E0203 12:23:42.180688 4679 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:23:42 crc kubenswrapper[4679]: E0203 12:23:42.180743 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift podName:17121344-4061-43d2-bf89-7a3684b88461 nodeName:}" failed. No retries permitted until 2026-02-03 12:23:44.180727666 +0000 UTC m=+1096.655623744 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift") pod "swift-storage-0" (UID: "17121344-4061-43d2-bf89-7a3684b88461") : configmap "swift-ring-files" not found Feb 03 12:23:42 crc kubenswrapper[4679]: I0203 12:23:42.221965 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9eb123d-3f0b-4621-a5f4-86215683cdee" path="/var/lib/kubelet/pods/d9eb123d-3f0b-4621-a5f4-86215683cdee/volumes" Feb 03 12:23:42 crc kubenswrapper[4679]: I0203 12:23:42.981682 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-djvr2" event={"ID":"f98e14ea-27e5-471b-900d-39c0cc2d676f","Type":"ContainerStarted","Data":"b03a134b9174ae8a341b7e9bf99664aed0a2f91f4473b4eab1406020385aa505"} Feb 03 12:23:43 crc kubenswrapper[4679]: I0203 12:23:43.010488 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-djvr2" podStartSLOduration=4.01046733 podStartE2EDuration="4.01046733s" podCreationTimestamp="2026-02-03 12:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:43.004896285 +0000 UTC m=+1095.479792393" watchObservedRunningTime="2026-02-03 12:23:43.01046733 +0000 UTC m=+1095.485363418" Feb 03 12:23:43 crc kubenswrapper[4679]: I0203 12:23:43.989147 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:44 crc kubenswrapper[4679]: I0203 12:23:44.222020 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:44 crc kubenswrapper[4679]: E0203 12:23:44.222163 4679 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:23:44 crc kubenswrapper[4679]: E0203 12:23:44.222192 4679 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:23:44 crc kubenswrapper[4679]: E0203 12:23:44.222251 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift podName:17121344-4061-43d2-bf89-7a3684b88461 nodeName:}" failed. No retries permitted until 2026-02-03 12:23:48.222235437 +0000 UTC m=+1100.697131525 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift") pod "swift-storage-0" (UID: "17121344-4061-43d2-bf89-7a3684b88461") : configmap "swift-ring-files" not found Feb 03 12:23:45 crc kubenswrapper[4679]: I0203 12:23:45.358059 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 03 12:23:45 crc kubenswrapper[4679]: I0203 12:23:45.359273 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 03 12:23:45 crc kubenswrapper[4679]: I0203 12:23:45.438223 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.092909 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.544261 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.544389 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.651509 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.830582 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2dwgx"] Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.831636 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.849044 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2dwgx"] Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.877031 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a1137b-9229-42b1-8764-7169cfc309f9-operator-scripts\") pod \"keystone-db-create-2dwgx\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.877413 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgg6\" (UniqueName: \"kubernetes.io/projected/a2a1137b-9229-42b1-8764-7169cfc309f9-kube-api-access-lhgg6\") pod \"keystone-db-create-2dwgx\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.926480 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-efd2-account-create-update-nn9tr"] Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.928079 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.931126 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.946991 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-efd2-account-create-update-nn9tr"] Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.978950 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhgg6\" (UniqueName: \"kubernetes.io/projected/a2a1137b-9229-42b1-8764-7169cfc309f9-kube-api-access-lhgg6\") pod \"keystone-db-create-2dwgx\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.979024 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a1137b-9229-42b1-8764-7169cfc309f9-operator-scripts\") pod \"keystone-db-create-2dwgx\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.979088 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7xr\" (UniqueName: \"kubernetes.io/projected/0dac64e9-0810-4130-b75e-9711ce1ab490-kube-api-access-6r7xr\") pod \"keystone-efd2-account-create-update-nn9tr\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.979116 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dac64e9-0810-4130-b75e-9711ce1ab490-operator-scripts\") pod \"keystone-efd2-account-create-update-nn9tr\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:46 crc kubenswrapper[4679]: I0203 12:23:46.980215 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a1137b-9229-42b1-8764-7169cfc309f9-operator-scripts\") pod \"keystone-db-create-2dwgx\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.004151 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhgg6\" (UniqueName: \"kubernetes.io/projected/a2a1137b-9229-42b1-8764-7169cfc309f9-kube-api-access-lhgg6\") pod \"keystone-db-create-2dwgx\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.030935 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7mtlb" event={"ID":"43821977-e5d9-4405-b6c6-d739a8fea389","Type":"ContainerStarted","Data":"7e69e45aafc1ec7b5975840b44e7edf37fc24582d9132fd7dcd2f93643239816"} Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.032940 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5s9wx"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.034057 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.050347 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5s9wx"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.080434 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7xr\" (UniqueName: \"kubernetes.io/projected/0dac64e9-0810-4130-b75e-9711ce1ab490-kube-api-access-6r7xr\") pod \"keystone-efd2-account-create-update-nn9tr\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.080496 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dac64e9-0810-4130-b75e-9711ce1ab490-operator-scripts\") pod \"keystone-efd2-account-create-update-nn9tr\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.081592 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dac64e9-0810-4130-b75e-9711ce1ab490-operator-scripts\") pod \"keystone-efd2-account-create-update-nn9tr\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.092295 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7mtlb" podStartSLOduration=1.868154685 podStartE2EDuration="6.092272001s" podCreationTimestamp="2026-02-03 12:23:41 +0000 UTC" firstStartedPulling="2026-02-03 12:23:41.87980304 +0000 UTC m=+1094.354699128" lastFinishedPulling="2026-02-03 12:23:46.103920356 +0000 UTC m=+1098.578816444" observedRunningTime="2026-02-03 12:23:47.07421482 +0000 UTC m=+1099.549110908" watchObservedRunningTime="2026-02-03 12:23:47.092272001 +0000 UTC m=+1099.567168089" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.123956 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7xr\" (UniqueName: \"kubernetes.io/projected/0dac64e9-0810-4130-b75e-9711ce1ab490-kube-api-access-6r7xr\") pod \"keystone-efd2-account-create-update-nn9tr\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.152950 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.158752 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ddd2-account-create-update-8vcg8"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.162472 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.165627 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.174005 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ddd2-account-create-update-8vcg8"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.183346 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20466e92-2a82-420b-b597-9040869317ec-operator-scripts\") pod \"placement-db-create-5s9wx\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.183445 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8dl\" (UniqueName: \"kubernetes.io/projected/20466e92-2a82-420b-b597-9040869317ec-kube-api-access-dg8dl\") pod \"placement-db-create-5s9wx\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.252134 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.287437 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8dl\" (UniqueName: \"kubernetes.io/projected/20466e92-2a82-420b-b597-9040869317ec-kube-api-access-dg8dl\") pod \"placement-db-create-5s9wx\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.287785 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtnj\" (UniqueName: \"kubernetes.io/projected/c8866094-2e6c-4147-ae24-b3051ac32108-kube-api-access-zrtnj\") pod \"placement-ddd2-account-create-update-8vcg8\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.287947 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8866094-2e6c-4147-ae24-b3051ac32108-operator-scripts\") pod \"placement-ddd2-account-create-update-8vcg8\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.288021 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20466e92-2a82-420b-b597-9040869317ec-operator-scripts\") pod \"placement-db-create-5s9wx\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.288980 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20466e92-2a82-420b-b597-9040869317ec-operator-scripts\") pod \"placement-db-create-5s9wx\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 
12:23:47.333517 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5pnsw"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.334878 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.337195 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8dl\" (UniqueName: \"kubernetes.io/projected/20466e92-2a82-420b-b597-9040869317ec-kube-api-access-dg8dl\") pod \"placement-db-create-5s9wx\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.353197 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5pnsw"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.369993 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.391278 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtnj\" (UniqueName: \"kubernetes.io/projected/c8866094-2e6c-4147-ae24-b3051ac32108-kube-api-access-zrtnj\") pod \"placement-ddd2-account-create-update-8vcg8\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.391422 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8866094-2e6c-4147-ae24-b3051ac32108-operator-scripts\") pod \"placement-ddd2-account-create-update-8vcg8\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.392399 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8866094-2e6c-4147-ae24-b3051ac32108-operator-scripts\") pod \"placement-ddd2-account-create-update-8vcg8\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.416786 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtnj\" (UniqueName: \"kubernetes.io/projected/c8866094-2e6c-4147-ae24-b3051ac32108-kube-api-access-zrtnj\") pod \"placement-ddd2-account-create-update-8vcg8\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.418011 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.467321 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0d49-account-create-update-dz92r"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.468617 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.474924 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.492985 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-operator-scripts\") pod \"glance-db-create-5pnsw\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.493066 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbtkt\" (UniqueName: \"kubernetes.io/projected/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-kube-api-access-cbtkt\") pod \"glance-db-create-5pnsw\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.503794 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0d49-account-create-update-dz92r"] Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.569349 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.595000 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-operator-scripts\") pod \"glance-db-create-5pnsw\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.595048 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mmw\" (UniqueName: \"kubernetes.io/projected/2e11319e-fc62-4971-a341-f8c39b7843cb-kube-api-access-l2mmw\") pod \"glance-0d49-account-create-update-dz92r\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.595142 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtkt\" (UniqueName: \"kubernetes.io/projected/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-kube-api-access-cbtkt\") pod \"glance-db-create-5pnsw\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.595198 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e11319e-fc62-4971-a341-f8c39b7843cb-operator-scripts\") pod \"glance-0d49-account-create-update-dz92r\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.596113 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-operator-scripts\") pod \"glance-db-create-5pnsw\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.615046 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cbtkt\" (UniqueName: \"kubernetes.io/projected/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-kube-api-access-cbtkt\") pod \"glance-db-create-5pnsw\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.697548 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e11319e-fc62-4971-a341-f8c39b7843cb-operator-scripts\") pod \"glance-0d49-account-create-update-dz92r\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.697678 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2mmw\" (UniqueName: \"kubernetes.io/projected/2e11319e-fc62-4971-a341-f8c39b7843cb-kube-api-access-l2mmw\") pod \"glance-0d49-account-create-update-dz92r\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.702596 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e11319e-fc62-4971-a341-f8c39b7843cb-operator-scripts\") pod \"glance-0d49-account-create-update-dz92r\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.712279 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.724538 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2mmw\" (UniqueName: \"kubernetes.io/projected/2e11319e-fc62-4971-a341-f8c39b7843cb-kube-api-access-l2mmw\") pod \"glance-0d49-account-create-update-dz92r\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.813862 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:47 crc kubenswrapper[4679]: I0203 12:23:47.826901 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2dwgx"] Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.041953 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2dwgx" event={"ID":"a2a1137b-9229-42b1-8764-7169cfc309f9","Type":"ContainerStarted","Data":"e3903ab89c5cb41471b09e7246de3c9560f65e3db2af0408ebfd547ffecec3a7"} Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.135270 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-efd2-account-create-update-nn9tr"] Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.182069 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.187924 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5s9wx"] Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.311998 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0" Feb 03 12:23:48 crc kubenswrapper[4679]: E0203 12:23:48.314805 4679 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 12:23:48 crc kubenswrapper[4679]: E0203 12:23:48.314827 4679 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 12:23:48 crc kubenswrapper[4679]: E0203 12:23:48.314907 4679 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift podName:17121344-4061-43d2-bf89-7a3684b88461 nodeName:}" failed. No retries permitted until 2026-02-03 12:23:56.314885842 +0000 UTC m=+1108.789781930 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift") pod "swift-storage-0" (UID: "17121344-4061-43d2-bf89-7a3684b88461") : configmap "swift-ring-files" not found Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.328177 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ddd2-account-create-update-8vcg8"] Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.414823 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5pnsw"] Feb 03 12:23:48 crc kubenswrapper[4679]: I0203 12:23:48.676471 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0d49-account-create-update-dz92r"] Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.049818 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0d49-account-create-update-dz92r" event={"ID":"2e11319e-fc62-4971-a341-f8c39b7843cb","Type":"ContainerStarted","Data":"592671ead8dfdcd3f46fdef7d5cb7bcbbe8d1e00f4e0101b939950b81f560048"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.050200 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0d49-account-create-update-dz92r" event={"ID":"2e11319e-fc62-4971-a341-f8c39b7843cb","Type":"ContainerStarted","Data":"a126c1dcaf0237284b5332dad3c96682e4a202692e65da5011bb2c88476ff715"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.052241 4679 generic.go:334] "Generic (PLEG): container finished" podID="0dac64e9-0810-4130-b75e-9711ce1ab490" containerID="1d7a2f570f3c87cff1edadccd6a628506dbe84ba079af8773ac342ef6fcab711" exitCode=0 Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.052351 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-efd2-account-create-update-nn9tr" event={"ID":"0dac64e9-0810-4130-b75e-9711ce1ab490","Type":"ContainerDied","Data":"1d7a2f570f3c87cff1edadccd6a628506dbe84ba079af8773ac342ef6fcab711"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.052400 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-efd2-account-create-update-nn9tr" event={"ID":"0dac64e9-0810-4130-b75e-9711ce1ab490","Type":"ContainerStarted","Data":"e37c6086cc360a9423ed89f116183ef846998fbb9549708c3f8c6a4711ef7f45"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.054060 4679 generic.go:334] "Generic (PLEG): container finished" podID="c8866094-2e6c-4147-ae24-b3051ac32108" containerID="fd912c10308b73573d22169e4be213c1b1de66a54bd65ab840a438adff1c4127" exitCode=0 Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.054100 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddd2-account-create-update-8vcg8" event={"ID":"c8866094-2e6c-4147-ae24-b3051ac32108","Type":"ContainerDied","Data":"fd912c10308b73573d22169e4be213c1b1de66a54bd65ab840a438adff1c4127"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.054172 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddd2-account-create-update-8vcg8" event={"ID":"c8866094-2e6c-4147-ae24-b3051ac32108","Type":"ContainerStarted","Data":"4c79b880b926b6da674674871beaee0b25bd912e7edfe67affc78b4ff5f6f93e"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.055929 4679 generic.go:334] "Generic (PLEG): container finished" podID="a2a1137b-9229-42b1-8764-7169cfc309f9" containerID="d32281488958b1b73df3fdda0a906d1af9ccf4066be198a0497b77971418b74c" exitCode=0 Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.056052 
4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2dwgx" event={"ID":"a2a1137b-9229-42b1-8764-7169cfc309f9","Type":"ContainerDied","Data":"d32281488958b1b73df3fdda0a906d1af9ccf4066be198a0497b77971418b74c"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.057927 4679 generic.go:334] "Generic (PLEG): container finished" podID="20466e92-2a82-420b-b597-9040869317ec" containerID="47c5dc908b7b831011fbaea724b9bc94327a992123de3a47abf0977b5f8ee627" exitCode=0 Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.057988 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5s9wx" event={"ID":"20466e92-2a82-420b-b597-9040869317ec","Type":"ContainerDied","Data":"47c5dc908b7b831011fbaea724b9bc94327a992123de3a47abf0977b5f8ee627"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.058013 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5s9wx" event={"ID":"20466e92-2a82-420b-b597-9040869317ec","Type":"ContainerStarted","Data":"024149b217f6667a6c9e730f106434003ba11392ff43c7b77744b5a18e315ad9"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.060244 4679 generic.go:334] "Generic (PLEG): container finished" podID="ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" containerID="5e511c40dc14bb7e975ef8a30c06a48f2dbdc07d4decd4b63148eea179a40633" exitCode=0 Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.060274 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5pnsw" event={"ID":"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987","Type":"ContainerDied","Data":"5e511c40dc14bb7e975ef8a30c06a48f2dbdc07d4decd4b63148eea179a40633"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.060299 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5pnsw" event={"ID":"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987","Type":"ContainerStarted","Data":"e10ed34c4312bcc757d30db7c933aeb3b83f7b6aad680d5f5696b9d7ba77f973"} Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.074552 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-0d49-account-create-update-dz92r" podStartSLOduration=2.074532585 podStartE2EDuration="2.074532585s" podCreationTimestamp="2026-02-03 12:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:49.068801756 +0000 UTC m=+1101.543697844" watchObservedRunningTime="2026-02-03 12:23:49.074532585 +0000 UTC m=+1101.549428673" Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.571832 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.680304 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4k4sm"] Feb 03 12:23:49 crc kubenswrapper[4679]: I0203 12:23:49.680701 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerName="dnsmasq-dns" containerID="cri-o://72da974a1427da770435f77b77e79741fe89d8d757129fad84deac6754624ac1" gracePeriod=10 Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.071955 4679 generic.go:334] "Generic (PLEG): container finished" podID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerID="72da974a1427da770435f77b77e79741fe89d8d757129fad84deac6754624ac1" exitCode=0 Feb 03 12:23:50 crc 
kubenswrapper[4679]: I0203 12:23:50.072159 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" event={"ID":"ced8c228-de1b-4af4-b503-94b1c05499a8","Type":"ContainerDied","Data":"72da974a1427da770435f77b77e79741fe89d8d757129fad84deac6754624ac1"} Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.074198 4679 generic.go:334] "Generic (PLEG): container finished" podID="2e11319e-fc62-4971-a341-f8c39b7843cb" containerID="592671ead8dfdcd3f46fdef7d5cb7bcbbe8d1e00f4e0101b939950b81f560048" exitCode=0 Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.074957 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0d49-account-create-update-dz92r" event={"ID":"2e11319e-fc62-4971-a341-f8c39b7843cb","Type":"ContainerDied","Data":"592671ead8dfdcd3f46fdef7d5cb7bcbbe8d1e00f4e0101b939950b81f560048"} Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.163436 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.253977 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-config\") pod \"ced8c228-de1b-4af4-b503-94b1c05499a8\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.254126 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l52b4\" (UniqueName: \"kubernetes.io/projected/ced8c228-de1b-4af4-b503-94b1c05499a8-kube-api-access-l52b4\") pod \"ced8c228-de1b-4af4-b503-94b1c05499a8\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.254148 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-dns-svc\") pod \"ced8c228-de1b-4af4-b503-94b1c05499a8\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.254186 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-nb\") pod \"ced8c228-de1b-4af4-b503-94b1c05499a8\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.254204 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-sb\") pod \"ced8c228-de1b-4af4-b503-94b1c05499a8\" (UID: \"ced8c228-de1b-4af4-b503-94b1c05499a8\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.269760 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced8c228-de1b-4af4-b503-94b1c05499a8-kube-api-access-l52b4" (OuterVolumeSpecName: "kube-api-access-l52b4") pod "ced8c228-de1b-4af4-b503-94b1c05499a8" (UID: "ced8c228-de1b-4af4-b503-94b1c05499a8"). InnerVolumeSpecName "kube-api-access-l52b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.302692 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ced8c228-de1b-4af4-b503-94b1c05499a8" (UID: "ced8c228-de1b-4af4-b503-94b1c05499a8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.308163 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-config" (OuterVolumeSpecName: "config") pod "ced8c228-de1b-4af4-b503-94b1c05499a8" (UID: "ced8c228-de1b-4af4-b503-94b1c05499a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.324963 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ced8c228-de1b-4af4-b503-94b1c05499a8" (UID: "ced8c228-de1b-4af4-b503-94b1c05499a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.337016 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ced8c228-de1b-4af4-b503-94b1c05499a8" (UID: "ced8c228-de1b-4af4-b503-94b1c05499a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.356565 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l52b4\" (UniqueName: \"kubernetes.io/projected/ced8c228-de1b-4af4-b503-94b1c05499a8-kube-api-access-l52b4\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.356588 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.356599 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.356607 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.356615 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced8c228-de1b-4af4-b503-94b1c05499a8-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.446873 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.560009 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a1137b-9229-42b1-8764-7169cfc309f9-operator-scripts\") pod \"a2a1137b-9229-42b1-8764-7169cfc309f9\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.560151 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhgg6\" (UniqueName: \"kubernetes.io/projected/a2a1137b-9229-42b1-8764-7169cfc309f9-kube-api-access-lhgg6\") pod \"a2a1137b-9229-42b1-8764-7169cfc309f9\" (UID: \"a2a1137b-9229-42b1-8764-7169cfc309f9\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.560556 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a1137b-9229-42b1-8764-7169cfc309f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2a1137b-9229-42b1-8764-7169cfc309f9" (UID: "a2a1137b-9229-42b1-8764-7169cfc309f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.560917 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2a1137b-9229-42b1-8764-7169cfc309f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.564910 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a1137b-9229-42b1-8764-7169cfc309f9-kube-api-access-lhgg6" (OuterVolumeSpecName: "kube-api-access-lhgg6") pod "a2a1137b-9229-42b1-8764-7169cfc309f9" (UID: "a2a1137b-9229-42b1-8764-7169cfc309f9"). InnerVolumeSpecName "kube-api-access-lhgg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.624754 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.651376 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.656421 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.663842 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrtnj\" (UniqueName: \"kubernetes.io/projected/c8866094-2e6c-4147-ae24-b3051ac32108-kube-api-access-zrtnj\") pod \"c8866094-2e6c-4147-ae24-b3051ac32108\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.664096 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8866094-2e6c-4147-ae24-b3051ac32108-operator-scripts\") pod \"c8866094-2e6c-4147-ae24-b3051ac32108\" (UID: \"c8866094-2e6c-4147-ae24-b3051ac32108\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.664504 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhgg6\" (UniqueName: \"kubernetes.io/projected/a2a1137b-9229-42b1-8764-7169cfc309f9-kube-api-access-lhgg6\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.664570 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8866094-2e6c-4147-ae24-b3051ac32108-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8866094-2e6c-4147-ae24-b3051ac32108" (UID: "c8866094-2e6c-4147-ae24-b3051ac32108"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.670130 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8866094-2e6c-4147-ae24-b3051ac32108-kube-api-access-zrtnj" (OuterVolumeSpecName: "kube-api-access-zrtnj") pod "c8866094-2e6c-4147-ae24-b3051ac32108" (UID: "c8866094-2e6c-4147-ae24-b3051ac32108"). InnerVolumeSpecName "kube-api-access-zrtnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.671221 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766056 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20466e92-2a82-420b-b597-9040869317ec-operator-scripts\") pod \"20466e92-2a82-420b-b597-9040869317ec\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766121 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dac64e9-0810-4130-b75e-9711ce1ab490-operator-scripts\") pod \"0dac64e9-0810-4130-b75e-9711ce1ab490\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766273 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7xr\" (UniqueName: \"kubernetes.io/projected/0dac64e9-0810-4130-b75e-9711ce1ab490-kube-api-access-6r7xr\") pod \"0dac64e9-0810-4130-b75e-9711ce1ab490\" (UID: \"0dac64e9-0810-4130-b75e-9711ce1ab490\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766332 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-operator-scripts\") pod \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766383 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbtkt\" (UniqueName: \"kubernetes.io/projected/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-kube-api-access-cbtkt\") pod \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\" (UID: \"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766423 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg8dl\" (UniqueName: \"kubernetes.io/projected/20466e92-2a82-420b-b597-9040869317ec-kube-api-access-dg8dl\") pod \"20466e92-2a82-420b-b597-9040869317ec\" (UID: \"20466e92-2a82-420b-b597-9040869317ec\") " Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766777 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrtnj\" (UniqueName: \"kubernetes.io/projected/c8866094-2e6c-4147-ae24-b3051ac32108-kube-api-access-zrtnj\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.766797 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8866094-2e6c-4147-ae24-b3051ac32108-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.767819 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" (UID: "ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.767853 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20466e92-2a82-420b-b597-9040869317ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20466e92-2a82-420b-b597-9040869317ec" (UID: "20466e92-2a82-420b-b597-9040869317ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.767863 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dac64e9-0810-4130-b75e-9711ce1ab490-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dac64e9-0810-4130-b75e-9711ce1ab490" (UID: "0dac64e9-0810-4130-b75e-9711ce1ab490"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.770294 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dac64e9-0810-4130-b75e-9711ce1ab490-kube-api-access-6r7xr" (OuterVolumeSpecName: "kube-api-access-6r7xr") pod "0dac64e9-0810-4130-b75e-9711ce1ab490" (UID: "0dac64e9-0810-4130-b75e-9711ce1ab490"). InnerVolumeSpecName "kube-api-access-6r7xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.770353 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-kube-api-access-cbtkt" (OuterVolumeSpecName: "kube-api-access-cbtkt") pod "ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" (UID: "ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987"). InnerVolumeSpecName "kube-api-access-cbtkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.770804 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20466e92-2a82-420b-b597-9040869317ec-kube-api-access-dg8dl" (OuterVolumeSpecName: "kube-api-access-dg8dl") pod "20466e92-2a82-420b-b597-9040869317ec" (UID: "20466e92-2a82-420b-b597-9040869317ec"). InnerVolumeSpecName "kube-api-access-dg8dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.868313 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r7xr\" (UniqueName: \"kubernetes.io/projected/0dac64e9-0810-4130-b75e-9711ce1ab490-kube-api-access-6r7xr\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.868348 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.868374 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbtkt\" (UniqueName: \"kubernetes.io/projected/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987-kube-api-access-cbtkt\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.868384 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg8dl\" (UniqueName: \"kubernetes.io/projected/20466e92-2a82-420b-b597-9040869317ec-kube-api-access-dg8dl\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.868392 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20466e92-2a82-420b-b597-9040869317ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:50 crc kubenswrapper[4679]: I0203 12:23:50.868402 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dac64e9-0810-4130-b75e-9711ce1ab490-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.082974 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5s9wx" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.082975 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5s9wx" event={"ID":"20466e92-2a82-420b-b597-9040869317ec","Type":"ContainerDied","Data":"024149b217f6667a6c9e730f106434003ba11392ff43c7b77744b5a18e315ad9"} Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.083121 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="024149b217f6667a6c9e730f106434003ba11392ff43c7b77744b5a18e315ad9" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.084647 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5pnsw" event={"ID":"ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987","Type":"ContainerDied","Data":"e10ed34c4312bcc757d30db7c933aeb3b83f7b6aad680d5f5696b9d7ba77f973"} Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.084685 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10ed34c4312bcc757d30db7c933aeb3b83f7b6aad680d5f5696b9d7ba77f973" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.084667 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5pnsw" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.086212 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" event={"ID":"ced8c228-de1b-4af4-b503-94b1c05499a8","Type":"ContainerDied","Data":"2d7792cb845607d5652795b4d9ac3abe793095cffa5a0d3d2227d17b1984d777"} Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.086253 4679 scope.go:117] "RemoveContainer" containerID="72da974a1427da770435f77b77e79741fe89d8d757129fad84deac6754624ac1" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.086282 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4k4sm" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.089040 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-efd2-account-create-update-nn9tr" event={"ID":"0dac64e9-0810-4130-b75e-9711ce1ab490","Type":"ContainerDied","Data":"e37c6086cc360a9423ed89f116183ef846998fbb9549708c3f8c6a4711ef7f45"} Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.089067 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-efd2-account-create-update-nn9tr" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.089082 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37c6086cc360a9423ed89f116183ef846998fbb9549708c3f8c6a4711ef7f45" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.092276 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ddd2-account-create-update-8vcg8" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.092581 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddd2-account-create-update-8vcg8" event={"ID":"c8866094-2e6c-4147-ae24-b3051ac32108","Type":"ContainerDied","Data":"4c79b880b926b6da674674871beaee0b25bd912e7edfe67affc78b4ff5f6f93e"} Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.092615 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c79b880b926b6da674674871beaee0b25bd912e7edfe67affc78b4ff5f6f93e" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.094825 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2dwgx" event={"ID":"a2a1137b-9229-42b1-8764-7169cfc309f9","Type":"ContainerDied","Data":"e3903ab89c5cb41471b09e7246de3c9560f65e3db2af0408ebfd547ffecec3a7"} Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.094863 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3903ab89c5cb41471b09e7246de3c9560f65e3db2af0408ebfd547ffecec3a7" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.094969 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2dwgx" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.123332 4679 scope.go:117] "RemoveContainer" containerID="ea879df1cd1ef440513a0daf1332e4e1e01ee83b703ffa3685b0f27372c21803" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.125814 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4k4sm"] Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.135601 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4k4sm"] Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.486389 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.579905 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2mmw\" (UniqueName: \"kubernetes.io/projected/2e11319e-fc62-4971-a341-f8c39b7843cb-kube-api-access-l2mmw\") pod \"2e11319e-fc62-4971-a341-f8c39b7843cb\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.580034 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e11319e-fc62-4971-a341-f8c39b7843cb-operator-scripts\") pod \"2e11319e-fc62-4971-a341-f8c39b7843cb\" (UID: \"2e11319e-fc62-4971-a341-f8c39b7843cb\") " Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.580520 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e11319e-fc62-4971-a341-f8c39b7843cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e11319e-fc62-4971-a341-f8c39b7843cb" (UID: "2e11319e-fc62-4971-a341-f8c39b7843cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.584618 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e11319e-fc62-4971-a341-f8c39b7843cb-kube-api-access-l2mmw" (OuterVolumeSpecName: "kube-api-access-l2mmw") pod "2e11319e-fc62-4971-a341-f8c39b7843cb" (UID: "2e11319e-fc62-4971-a341-f8c39b7843cb"). InnerVolumeSpecName "kube-api-access-l2mmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.682196 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2mmw\" (UniqueName: \"kubernetes.io/projected/2e11319e-fc62-4971-a341-f8c39b7843cb-kube-api-access-l2mmw\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:51 crc kubenswrapper[4679]: I0203 12:23:51.682243 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e11319e-fc62-4971-a341-f8c39b7843cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.106684 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0d49-account-create-update-dz92r" event={"ID":"2e11319e-fc62-4971-a341-f8c39b7843cb","Type":"ContainerDied","Data":"a126c1dcaf0237284b5332dad3c96682e4a202692e65da5011bb2c88476ff715"} Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.107892 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a126c1dcaf0237284b5332dad3c96682e4a202692e65da5011bb2c88476ff715" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.106772 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0d49-account-create-update-dz92r" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.229303 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" path="/var/lib/kubelet/pods/ced8c228-de1b-4af4-b503-94b1c05499a8/volumes" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564165 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-flh2f"] Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564697 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564721 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564734 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerName="init" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564742 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerName="init" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564758 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dac64e9-0810-4130-b75e-9711ce1ab490" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564766 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dac64e9-0810-4130-b75e-9711ce1ab490" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564779 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a1137b-9229-42b1-8764-7169cfc309f9" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564785 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a1137b-9229-42b1-8764-7169cfc309f9" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564798 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" 
containerName="dnsmasq-dns" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564806 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerName="dnsmasq-dns" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564822 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8866094-2e6c-4147-ae24-b3051ac32108" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564829 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8866094-2e6c-4147-ae24-b3051ac32108" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564855 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20466e92-2a82-420b-b597-9040869317ec" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564864 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="20466e92-2a82-420b-b597-9040869317ec" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: E0203 12:23:52.564873 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e11319e-fc62-4971-a341-f8c39b7843cb" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.564881 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e11319e-fc62-4971-a341-f8c39b7843cb" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565064 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a1137b-9229-42b1-8764-7169cfc309f9" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565077 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced8c228-de1b-4af4-b503-94b1c05499a8" containerName="dnsmasq-dns" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565092 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8866094-2e6c-4147-ae24-b3051ac32108" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565106 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e11319e-fc62-4971-a341-f8c39b7843cb" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565119 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565132 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="20466e92-2a82-420b-b597-9040869317ec" containerName="mariadb-database-create" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565148 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dac64e9-0810-4130-b75e-9711ce1ab490" containerName="mariadb-account-create-update" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.565846 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.568663 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.577332 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-flh2f"] Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.578523 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fhx4w" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.599845 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-db-sync-config-data\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.599935 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-combined-ca-bundle\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.599995 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqnj\" (UniqueName: \"kubernetes.io/projected/2d9b136b-ec91-4486-af62-ec1f49e4e010-kube-api-access-2xqnj\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.600116 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-config-data\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.701719 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-config-data\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.701814 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-db-sync-config-data\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.701846 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-combined-ca-bundle\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.701874 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqnj\" (UniqueName: \"kubernetes.io/projected/2d9b136b-ec91-4486-af62-ec1f49e4e010-kube-api-access-2xqnj\") pod 
\"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.707303 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-db-sync-config-data\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.711328 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-config-data\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.713032 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-combined-ca-bundle\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.723605 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqnj\" (UniqueName: \"kubernetes.io/projected/2d9b136b-ec91-4486-af62-ec1f49e4e010-kube-api-access-2xqnj\") pod \"glance-db-sync-flh2f\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:52 crc kubenswrapper[4679]: I0203 12:23:52.884745 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-flh2f" Feb 03 12:23:53 crc kubenswrapper[4679]: I0203 12:23:53.239059 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-flh2f"] Feb 03 12:23:53 crc kubenswrapper[4679]: W0203 12:23:53.244574 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d9b136b_ec91_4486_af62_ec1f49e4e010.slice/crio-b70a7c9467d5ea80c6e08b1ce3840534f44123995d904ba2d8fc64afc5b567f6 WatchSource:0}: Error finding container b70a7c9467d5ea80c6e08b1ce3840534f44123995d904ba2d8fc64afc5b567f6: Status 404 returned error can't find the container with id b70a7c9467d5ea80c6e08b1ce3840534f44123995d904ba2d8fc64afc5b567f6 Feb 03 12:23:53 crc kubenswrapper[4679]: I0203 12:23:53.944543 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btzss"] Feb 03 12:23:53 crc kubenswrapper[4679]: I0203 12:23:53.947158 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btzss" Feb 03 12:23:53 crc kubenswrapper[4679]: I0203 12:23:53.950233 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 03 12:23:53 crc kubenswrapper[4679]: I0203 12:23:53.956801 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btzss"] Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.024756 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18f746e-8e05-4feb-9e42-e3ab327827b4-operator-scripts\") pod \"root-account-create-update-btzss\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") " pod="openstack/root-account-create-update-btzss" Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.024932 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdn7d\" (UniqueName: \"kubernetes.io/projected/a18f746e-8e05-4feb-9e42-e3ab327827b4-kube-api-access-kdn7d\") pod \"root-account-create-update-btzss\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") " pod="openstack/root-account-create-update-btzss" Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.126453 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18f746e-8e05-4feb-9e42-e3ab327827b4-operator-scripts\") pod \"root-account-create-update-btzss\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") " pod="openstack/root-account-create-update-btzss" Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.126525 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdn7d\" (UniqueName: \"kubernetes.io/projected/a18f746e-8e05-4feb-9e42-e3ab327827b4-kube-api-access-kdn7d\") pod \"root-account-create-update-btzss\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") " pod="openstack/root-account-create-update-btzss" Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.127739 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18f746e-8e05-4feb-9e42-e3ab327827b4-operator-scripts\") pod \"root-account-create-update-btzss\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") " pod="openstack/root-account-create-update-btzss" Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.127917 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-flh2f" event={"ID":"2d9b136b-ec91-4486-af62-ec1f49e4e010","Type":"ContainerStarted","Data":"b70a7c9467d5ea80c6e08b1ce3840534f44123995d904ba2d8fc64afc5b567f6"} Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.148335 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdn7d\" (UniqueName: \"kubernetes.io/projected/a18f746e-8e05-4feb-9e42-e3ab327827b4-kube-api-access-kdn7d\") pod \"root-account-create-update-btzss\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") " pod="openstack/root-account-create-update-btzss" Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.272523 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btzss"
Feb 03 12:23:54 crc kubenswrapper[4679]: I0203 12:23:54.729634 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btzss"]
Feb 03 12:23:54 crc kubenswrapper[4679]: W0203 12:23:54.739441 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda18f746e_8e05_4feb_9e42_e3ab327827b4.slice/crio-005c6b67db8d3e5e7f1d68af9399ccaff2dcd555aaedcbfe565956ff237b4e90 WatchSource:0}: Error finding container 005c6b67db8d3e5e7f1d68af9399ccaff2dcd555aaedcbfe565956ff237b4e90: Status 404 returned error can't find the container with id 005c6b67db8d3e5e7f1d68af9399ccaff2dcd555aaedcbfe565956ff237b4e90
Feb 03 12:23:55 crc kubenswrapper[4679]: I0203 12:23:55.140774 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btzss" event={"ID":"a18f746e-8e05-4feb-9e42-e3ab327827b4","Type":"ContainerStarted","Data":"05746132873d2074ba0b5f422e47693c5787b44c71f8e9c08fb30a223f176170"}
Feb 03 12:23:55 crc kubenswrapper[4679]: I0203 12:23:55.141158 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btzss" event={"ID":"a18f746e-8e05-4feb-9e42-e3ab327827b4","Type":"ContainerStarted","Data":"005c6b67db8d3e5e7f1d68af9399ccaff2dcd555aaedcbfe565956ff237b4e90"}
Feb 03 12:23:55 crc kubenswrapper[4679]: I0203 12:23:55.158965 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-btzss" podStartSLOduration=2.158947802 podStartE2EDuration="2.158947802s" podCreationTimestamp="2026-02-03 12:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:23:55.158307075 +0000 UTC m=+1107.633203183" watchObservedRunningTime="2026-02-03 12:23:55.158947802 +0000 UTC m=+1107.633843890"
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.152592 4679 generic.go:334] "Generic (PLEG): container finished" podID="43821977-e5d9-4405-b6c6-d739a8fea389" containerID="7e69e45aafc1ec7b5975840b44e7edf37fc24582d9132fd7dcd2f93643239816" exitCode=0
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.152661 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7mtlb" event={"ID":"43821977-e5d9-4405-b6c6-d739a8fea389","Type":"ContainerDied","Data":"7e69e45aafc1ec7b5975840b44e7edf37fc24582d9132fd7dcd2f93643239816"}
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.154659 4679 generic.go:334] "Generic (PLEG): container finished" podID="a18f746e-8e05-4feb-9e42-e3ab327827b4" containerID="05746132873d2074ba0b5f422e47693c5787b44c71f8e9c08fb30a223f176170" exitCode=0
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.154702 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btzss" event={"ID":"a18f746e-8e05-4feb-9e42-e3ab327827b4","Type":"ContainerDied","Data":"05746132873d2074ba0b5f422e47693c5787b44c71f8e9c08fb30a223f176170"}
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.368078 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0"
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.384906 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/17121344-4061-43d2-bf89-7a3684b88461-etc-swift\") pod \"swift-storage-0\" (UID: \"17121344-4061-43d2-bf89-7a3684b88461\") " pod="openstack/swift-storage-0"
Feb 03 12:23:56 crc kubenswrapper[4679]: I0203 12:23:56.679544 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.166622 4679 generic.go:334] "Generic (PLEG): container finished" podID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerID="8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3" exitCode=0
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.166949 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438272b0-d957-44f7-aa5e-502ce5189f9c","Type":"ContainerDied","Data":"8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3"}
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.177243 4679 generic.go:334] "Generic (PLEG): container finished" podID="73f156fc-e458-470c-ad7b-24125be5762c" containerID="c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934" exitCode=0
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.177330 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73f156fc-e458-470c-ad7b-24125be5762c","Type":"ContainerDied","Data":"c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934"}
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.405228 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.630753 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7mtlb"
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.683907 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btzss"
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702477 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-scripts\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702536 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/43821977-e5d9-4405-b6c6-d739a8fea389-etc-swift\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702578 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-combined-ca-bundle\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702621 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18f746e-8e05-4feb-9e42-e3ab327827b4-operator-scripts\") pod \"a18f746e-8e05-4feb-9e42-e3ab327827b4\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702664 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-ring-data-devices\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702712 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdn7d\" (UniqueName: \"kubernetes.io/projected/a18f746e-8e05-4feb-9e42-e3ab327827b4-kube-api-access-kdn7d\") pod \"a18f746e-8e05-4feb-9e42-e3ab327827b4\" (UID: \"a18f746e-8e05-4feb-9e42-e3ab327827b4\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702751 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-swiftconf\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702778 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7pg8\" (UniqueName: \"kubernetes.io/projected/43821977-e5d9-4405-b6c6-d739a8fea389-kube-api-access-v7pg8\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.702813 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-dispersionconf\") pod \"43821977-e5d9-4405-b6c6-d739a8fea389\" (UID: \"43821977-e5d9-4405-b6c6-d739a8fea389\") "
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.704229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a18f746e-8e05-4feb-9e42-e3ab327827b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a18f746e-8e05-4feb-9e42-e3ab327827b4" (UID: "a18f746e-8e05-4feb-9e42-e3ab327827b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.708640 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.708960 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43821977-e5d9-4405-b6c6-d739a8fea389-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.713781 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43821977-e5d9-4405-b6c6-d739a8fea389-kube-api-access-v7pg8" (OuterVolumeSpecName: "kube-api-access-v7pg8") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "kube-api-access-v7pg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.721350 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18f746e-8e05-4feb-9e42-e3ab327827b4-kube-api-access-kdn7d" (OuterVolumeSpecName: "kube-api-access-kdn7d") pod "a18f746e-8e05-4feb-9e42-e3ab327827b4" (UID: "a18f746e-8e05-4feb-9e42-e3ab327827b4"). InnerVolumeSpecName "kube-api-access-kdn7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.722795 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.738868 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.741844 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.754207 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-scripts" (OuterVolumeSpecName: "scripts") pod "43821977-e5d9-4405-b6c6-d739a8fea389" (UID: "43821977-e5d9-4405-b6c6-d739a8fea389"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.805430 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.805948 4679 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/43821977-e5d9-4405-b6c6-d739a8fea389-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.805993 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.806026 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a18f746e-8e05-4feb-9e42-e3ab327827b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.806059 4679 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/43821977-e5d9-4405-b6c6-d739a8fea389-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.806072 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdn7d\" (UniqueName: \"kubernetes.io/projected/a18f746e-8e05-4feb-9e42-e3ab327827b4-kube-api-access-kdn7d\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.806086 4679 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.806099 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7pg8\" (UniqueName: \"kubernetes.io/projected/43821977-e5d9-4405-b6c6-d739a8fea389-kube-api-access-v7pg8\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:57 crc kubenswrapper[4679]: I0203 12:23:57.806114 4679 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/43821977-e5d9-4405-b6c6-d739a8fea389-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.197092 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438272b0-d957-44f7-aa5e-502ce5189f9c","Type":"ContainerStarted","Data":"1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8"}
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.197469 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.236801 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.93126503 podStartE2EDuration="56.236781011s" podCreationTimestamp="2026-02-03 12:23:02 +0000 UTC" firstStartedPulling="2026-02-03 12:23:10.899192423 +0000 UTC m=+1063.374088511" lastFinishedPulling="2026-02-03 12:23:23.204708404 +0000 UTC m=+1075.679604492" observedRunningTime="2026-02-03 12:23:58.221316427 +0000 UTC m=+1110.696212545" watchObservedRunningTime="2026-02-03 12:23:58.236781011 +0000 UTC m=+1110.711677099"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.245973 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7mtlb"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.246038 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btzss"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.247586 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"73b12f7237003dfd1d679602055dc6bfad085003fa20822275f574a8a68c5292"}
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.247624 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73f156fc-e458-470c-ad7b-24125be5762c","Type":"ContainerStarted","Data":"c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7"}
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.247641 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7mtlb" event={"ID":"43821977-e5d9-4405-b6c6-d739a8fea389","Type":"ContainerDied","Data":"ddb7808014b6304df5f78dba17ae484f3382778fa558795a2ba8b2f1b80d9729"}
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.247658 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb7808014b6304df5f78dba17ae484f3382778fa558795a2ba8b2f1b80d9729"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.247670 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btzss" event={"ID":"a18f746e-8e05-4feb-9e42-e3ab327827b4","Type":"ContainerDied","Data":"005c6b67db8d3e5e7f1d68af9399ccaff2dcd555aaedcbfe565956ff237b4e90"}
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.247702 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="005c6b67db8d3e5e7f1d68af9399ccaff2dcd555aaedcbfe565956ff237b4e90"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.248019 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.385837 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.121919817 podStartE2EDuration="56.385818502s" podCreationTimestamp="2026-02-03 12:23:02 +0000 UTC" firstStartedPulling="2026-02-03 12:23:10.912620383 +0000 UTC m=+1063.387516471" lastFinishedPulling="2026-02-03 12:23:23.176519068 +0000 UTC m=+1075.651415156" observedRunningTime="2026-02-03 12:23:58.385187905 +0000 UTC m=+1110.860084013" watchObservedRunningTime="2026-02-03 12:23:58.385818502 +0000 UTC m=+1110.860714580"
Feb 03 12:23:58 crc kubenswrapper[4679]: I0203 12:23:58.476860 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 03 12:24:00 crc kubenswrapper[4679]: I0203 12:24:00.262053 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btzss"]
Feb 03 12:24:00 crc kubenswrapper[4679]: I0203 12:24:00.267903 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btzss"]
Feb 03 12:24:02 crc kubenswrapper[4679]: I0203 12:24:02.246732 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18f746e-8e05-4feb-9e42-e3ab327827b4" path="/var/lib/kubelet/pods/a18f746e-8e05-4feb-9e42-e3ab327827b4/volumes"
Feb 03 12:24:02 crc kubenswrapper[4679]: I0203 12:24:02.681008 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5tt4c" podUID="c908c598-a229-467c-8430-de77205f95ec" containerName="ovn-controller" probeResult="failure" output=<
Feb 03 12:24:02 crc kubenswrapper[4679]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 03 12:24:02 crc kubenswrapper[4679]: >
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.254708 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9xc68"]
Feb 03 12:24:05 crc kubenswrapper[4679]: E0203 12:24:05.256794 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18f746e-8e05-4feb-9e42-e3ab327827b4" containerName="mariadb-account-create-update"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.256828 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18f746e-8e05-4feb-9e42-e3ab327827b4" containerName="mariadb-account-create-update"
Feb 03 12:24:05 crc kubenswrapper[4679]: E0203 12:24:05.256875 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43821977-e5d9-4405-b6c6-d739a8fea389" containerName="swift-ring-rebalance"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.256884 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="43821977-e5d9-4405-b6c6-d739a8fea389" containerName="swift-ring-rebalance"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.257113 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18f746e-8e05-4feb-9e42-e3ab327827b4" containerName="mariadb-account-create-update"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.257133 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="43821977-e5d9-4405-b6c6-d739a8fea389" containerName="swift-ring-rebalance"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.258205 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.267661 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.275666 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9xc68"]
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.368834 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8clg9\" (UniqueName: \"kubernetes.io/projected/72232599-3fc6-423f-a36f-d684a7b77fef-kube-api-access-8clg9\") pod \"root-account-create-update-9xc68\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") " pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.368920 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72232599-3fc6-423f-a36f-d684a7b77fef-operator-scripts\") pod \"root-account-create-update-9xc68\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") " pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.471547 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8clg9\" (UniqueName: \"kubernetes.io/projected/72232599-3fc6-423f-a36f-d684a7b77fef-kube-api-access-8clg9\") pod \"root-account-create-update-9xc68\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") " pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.471662 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72232599-3fc6-423f-a36f-d684a7b77fef-operator-scripts\") pod \"root-account-create-update-9xc68\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") " pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.472667 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72232599-3fc6-423f-a36f-d684a7b77fef-operator-scripts\") pod \"root-account-create-update-9xc68\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") " pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.496690 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8clg9\" (UniqueName: \"kubernetes.io/projected/72232599-3fc6-423f-a36f-d684a7b77fef-kube-api-access-8clg9\") pod \"root-account-create-update-9xc68\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") " pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:05 crc kubenswrapper[4679]: I0203 12:24:05.590962 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:07 crc kubenswrapper[4679]: I0203 12:24:07.692110 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5tt4c" podUID="c908c598-a229-467c-8430-de77205f95ec" containerName="ovn-controller" probeResult="failure" output=<
Feb 03 12:24:07 crc kubenswrapper[4679]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 03 12:24:07 crc kubenswrapper[4679]: >
Feb 03 12:24:07 crc kubenswrapper[4679]: I0203 12:24:07.796657 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9zrqh"
Feb 03 12:24:07 crc kubenswrapper[4679]: I0203 12:24:07.803878 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9zrqh"
Feb 03 12:24:07 crc kubenswrapper[4679]: I0203 12:24:07.817606 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9xc68"]
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.061149 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5tt4c-config-v9qjr"]
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.062294 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.066664 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.087476 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tt4c-config-v9qjr"]
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.236261 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run-ovn\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.236336 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-scripts\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.236430 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.236509 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-additional-scripts\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.236678 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ctwc\" (UniqueName: \"kubernetes.io/projected/3474419b-59cd-40ce-90ef-f626b21204e4-kube-api-access-6ctwc\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.236747 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-log-ovn\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.338841 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-additional-scripts\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.339484 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ctwc\" (UniqueName: \"kubernetes.io/projected/3474419b-59cd-40ce-90ef-f626b21204e4-kube-api-access-6ctwc\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.339582 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-log-ovn\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.339727 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run-ovn\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.339762 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-scripts\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.339796 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.340230 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.341353 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-additional-scripts\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.343147 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-log-ovn\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.343505 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run-ovn\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.346657 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-scripts\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.356672 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9xc68" event={"ID":"72232599-3fc6-423f-a36f-d684a7b77fef","Type":"ContainerStarted","Data":"bf346f95cc09f5a770e84bbb89414120a7dd474dc62b88cc7f35e817df969490"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.356747 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9xc68" event={"ID":"72232599-3fc6-423f-a36f-d684a7b77fef","Type":"ContainerStarted","Data":"3ca8f1efc4915ec023c9a40f601cb08960bb0040b79ba49c83042fa2af4d62ed"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.359344 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-flh2f" event={"ID":"2d9b136b-ec91-4486-af62-ec1f49e4e010","Type":"ContainerStarted","Data":"702dfef749bd9bf0497f8de1ebe892d3ea127ea521fedcc0bc24f24ba77bf6e9"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.365515 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"ca4a1ef26c71ca452ffd55230a6f416fa7bdb2740bb71df27a1f550b88af1d60"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.365608 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"cdcc5ecb721d45095ebd57c5c465cc13d21dd0131b05dd696190d7e8f378f658"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.365622 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"e0a7e510a6dd5de469055e421cb1a5c9bacee7e4e759479e170f40b627a9cb0c"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.365640 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"21e249a6be358c08582953a25e5851ebb9b38331216c14359ed199f00941251f"}
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.374386 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ctwc\" (UniqueName: \"kubernetes.io/projected/3474419b-59cd-40ce-90ef-f626b21204e4-kube-api-access-6ctwc\") pod \"ovn-controller-5tt4c-config-v9qjr\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") " pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.382155 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.388143 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9xc68" podStartSLOduration=3.3881073600000002 podStartE2EDuration="3.38810736s" podCreationTimestamp="2026-02-03 12:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:08.376569459 +0000 UTC m=+1120.851465567" watchObservedRunningTime="2026-02-03 12:24:08.38810736 +0000 UTC m=+1120.863003448"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.881829 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-flh2f" podStartSLOduration=2.777159685 podStartE2EDuration="16.881808811s" podCreationTimestamp="2026-02-03 12:23:52 +0000 UTC" firstStartedPulling="2026-02-03 12:23:53.248302478 +0000 UTC m=+1105.723198566" lastFinishedPulling="2026-02-03 12:24:07.352951604 +0000 UTC m=+1119.827847692" observedRunningTime="2026-02-03 12:24:08.406693216 +0000 UTC m=+1120.881589304" watchObservedRunningTime="2026-02-03 12:24:08.881808811 +0000 UTC m=+1121.356704899"
Feb 03 12:24:08 crc kubenswrapper[4679]: I0203 12:24:08.885990 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5tt4c-config-v9qjr"]
Feb 03 12:24:09 crc kubenswrapper[4679]: I0203 12:24:09.378491 4679 generic.go:334] "Generic (PLEG): container finished" podID="72232599-3fc6-423f-a36f-d684a7b77fef" containerID="bf346f95cc09f5a770e84bbb89414120a7dd474dc62b88cc7f35e817df969490" exitCode=0
Feb 03 12:24:09 crc kubenswrapper[4679]: I0203 12:24:09.378664 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9xc68" event={"ID":"72232599-3fc6-423f-a36f-d684a7b77fef","Type":"ContainerDied","Data":"bf346f95cc09f5a770e84bbb89414120a7dd474dc62b88cc7f35e817df969490"}
Feb 03 12:24:09 crc kubenswrapper[4679]: I0203 12:24:09.382385 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tt4c-config-v9qjr" event={"ID":"3474419b-59cd-40ce-90ef-f626b21204e4","Type":"ContainerStarted","Data":"6b076044091aa23937d7b37ecc22539c78017a31fd0632f4be7bd60bd09ad25c"}
Feb 03 12:24:09 crc kubenswrapper[4679]: I0203 12:24:09.382459 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tt4c-config-v9qjr" event={"ID":"3474419b-59cd-40ce-90ef-f626b21204e4","Type":"ContainerStarted","Data":"3a7ffe27dd70307f7113b165ebf8e6134ecf44c334edcf440e2a26b506167d04"}
Feb 03 12:24:09 crc kubenswrapper[4679]: I0203 12:24:09.433855 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5tt4c-config-v9qjr" podStartSLOduration=1.433823112 podStartE2EDuration="1.433823112s" podCreationTimestamp="2026-02-03 12:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:09.43028613 +0000 UTC m=+1121.905182228" watchObservedRunningTime="2026-02-03 12:24:09.433823112 +0000 UTC m=+1121.908719200"
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.415876 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"4b897c16a15f7ddbef85f9590287b10dc8ce059385aa868916e133147564b153"}
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.417020 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"e2d661a90b5e21d0347d01b879ad34aac3711135066d380c6162a35d45fc06a1"}
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.417038 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"b2dd613d7e41da6a8e2ddb01fe62d0af1ef3fb9edf6af4be6918b70721fb043a"}
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.417799 4679 generic.go:334] "Generic (PLEG): container finished" podID="3474419b-59cd-40ce-90ef-f626b21204e4" containerID="6b076044091aa23937d7b37ecc22539c78017a31fd0632f4be7bd60bd09ad25c" exitCode=0
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.417919 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tt4c-config-v9qjr" event={"ID":"3474419b-59cd-40ce-90ef-f626b21204e4","Type":"ContainerDied","Data":"6b076044091aa23937d7b37ecc22539c78017a31fd0632f4be7bd60bd09ad25c"}
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.860906 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.990683 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72232599-3fc6-423f-a36f-d684a7b77fef-operator-scripts\") pod \"72232599-3fc6-423f-a36f-d684a7b77fef\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") "
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.990738 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8clg9\" (UniqueName: \"kubernetes.io/projected/72232599-3fc6-423f-a36f-d684a7b77fef-kube-api-access-8clg9\") pod \"72232599-3fc6-423f-a36f-d684a7b77fef\" (UID: \"72232599-3fc6-423f-a36f-d684a7b77fef\") "
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.992274 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72232599-3fc6-423f-a36f-d684a7b77fef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72232599-3fc6-423f-a36f-d684a7b77fef" (UID: "72232599-3fc6-423f-a36f-d684a7b77fef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:24:10 crc kubenswrapper[4679]: I0203 12:24:10.997664 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72232599-3fc6-423f-a36f-d684a7b77fef-kube-api-access-8clg9" (OuterVolumeSpecName: "kube-api-access-8clg9") pod "72232599-3fc6-423f-a36f-d684a7b77fef" (UID: "72232599-3fc6-423f-a36f-d684a7b77fef"). InnerVolumeSpecName "kube-api-access-8clg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.092820 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72232599-3fc6-423f-a36f-d684a7b77fef-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.092872 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8clg9\" (UniqueName: \"kubernetes.io/projected/72232599-3fc6-423f-a36f-d684a7b77fef-kube-api-access-8clg9\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.440152 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"49ceeb9a816e43f1d58c832c873bcf99442bb03fde27d572d673de356cfbca4e"}
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.442324 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9xc68" event={"ID":"72232599-3fc6-423f-a36f-d684a7b77fef","Type":"ContainerDied","Data":"3ca8f1efc4915ec023c9a40f601cb08960bb0040b79ba49c83042fa2af4d62ed"}
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.442390 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9xc68"
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.442412 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca8f1efc4915ec023c9a40f601cb08960bb0040b79ba49c83042fa2af4d62ed"
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.851262 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.906268 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run-ovn\") pod \"3474419b-59cd-40ce-90ef-f626b21204e4\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") "
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.906928 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run\") pod \"3474419b-59cd-40ce-90ef-f626b21204e4\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") "
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.906975 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ctwc\" (UniqueName: \"kubernetes.io/projected/3474419b-59cd-40ce-90ef-f626b21204e4-kube-api-access-6ctwc\") pod \"3474419b-59cd-40ce-90ef-f626b21204e4\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") "
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.906998 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-log-ovn\") pod \"3474419b-59cd-40ce-90ef-f626b21204e4\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") "
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.907038 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-scripts\") pod \"3474419b-59cd-40ce-90ef-f626b21204e4\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") "
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.907067 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-additional-scripts\") pod \"3474419b-59cd-40ce-90ef-f626b21204e4\" (UID: \"3474419b-59cd-40ce-90ef-f626b21204e4\") "
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.906611 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3474419b-59cd-40ce-90ef-f626b21204e4" (UID: "3474419b-59cd-40ce-90ef-f626b21204e4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.908344 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3474419b-59cd-40ce-90ef-f626b21204e4" (UID: "3474419b-59cd-40ce-90ef-f626b21204e4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.908419 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run" (OuterVolumeSpecName: "var-run") pod "3474419b-59cd-40ce-90ef-f626b21204e4" (UID: "3474419b-59cd-40ce-90ef-f626b21204e4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.908882 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3474419b-59cd-40ce-90ef-f626b21204e4" (UID: "3474419b-59cd-40ce-90ef-f626b21204e4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.909636 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-scripts" (OuterVolumeSpecName: "scripts") pod "3474419b-59cd-40ce-90ef-f626b21204e4" (UID: "3474419b-59cd-40ce-90ef-f626b21204e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:24:11 crc kubenswrapper[4679]: I0203 12:24:11.913434 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3474419b-59cd-40ce-90ef-f626b21204e4-kube-api-access-6ctwc" (OuterVolumeSpecName: "kube-api-access-6ctwc") pod "3474419b-59cd-40ce-90ef-f626b21204e4" (UID: "3474419b-59cd-40ce-90ef-f626b21204e4"). InnerVolumeSpecName "kube-api-access-6ctwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.009581 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.009618 4679 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3474419b-59cd-40ce-90ef-f626b21204e4-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.009630 4679 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.009641 4679 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-run\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.009650 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ctwc\" (UniqueName: \"kubernetes.io/projected/3474419b-59cd-40ce-90ef-f626b21204e4-kube-api-access-6ctwc\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.009660 4679 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3474419b-59cd-40ce-90ef-f626b21204e4-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.536758 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"a416f899acfc0a2d9c668071158d5483ab68cf53c5e072e5ddc3bf0d9f2b0bcc"}
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.538289 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"0d8d8cabdde307b1e3e2c1943f3a22a55d3b33b1abb1c97e8b494b239ea3ce4b"}
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.538459 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"7f55c8a1ceb16a4a62daaa33bc496368f77b29fa6aab80c5fc919ee61e88c27e"}
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.538580 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"d3fcc423ba9f5c927ce81f46baf94765ed8f5a9144c254940c4bfe5a780e4ed8"}
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.557212 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5tt4c-config-v9qjr" event={"ID":"3474419b-59cd-40ce-90ef-f626b21204e4","Type":"ContainerDied","Data":"3a7ffe27dd70307f7113b165ebf8e6134ecf44c334edcf440e2a26b506167d04"}
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.557266 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7ffe27dd70307f7113b165ebf8e6134ecf44c334edcf440e2a26b506167d04"
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.557379 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5tt4c-config-v9qjr"
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.602696 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5tt4c-config-v9qjr"]
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.621312 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5tt4c-config-v9qjr"]
Feb 03 12:24:12 crc kubenswrapper[4679]: I0203 12:24:12.694419 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5tt4c"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.575139 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"6d76afb48407230ec2c1fa1b575cf3b5d8531b848b5f9dd8e571d223771ee82a"}
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.575595 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"1c62ee56a291943f0ba4ca8f99967e9430d0a253ff90ad3b305128c3be7ed34f"}
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.575608 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"17121344-4061-43d2-bf89-7a3684b88461","Type":"ContainerStarted","Data":"06947bd0d4498c056d23864d29c7371b9d0ab81e31d5305dfe5709991794781f"}
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.622528 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.358649362 podStartE2EDuration="34.622495864s" podCreationTimestamp="2026-02-03 12:23:39 +0000 UTC" firstStartedPulling="2026-02-03 12:23:57.437302148 +0000 UTC m=+1109.912198236" lastFinishedPulling="2026-02-03 12:24:11.70114865 +0000 UTC m=+1124.176044738" observedRunningTime="2026-02-03 12:24:13.617426691 +0000 UTC m=+1126.092322799" watchObservedRunningTime="2026-02-03 12:24:13.622495864 +0000 UTC m=+1126.097391952"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.884523 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.937406 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-7b47n"]
Feb 03 12:24:13 crc kubenswrapper[4679]: E0203 12:24:13.937826 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72232599-3fc6-423f-a36f-d684a7b77fef" containerName="mariadb-account-create-update"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.937844 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="72232599-3fc6-423f-a36f-d684a7b77fef" containerName="mariadb-account-create-update"
Feb 03 12:24:13 crc kubenswrapper[4679]: E0203 12:24:13.937860 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3474419b-59cd-40ce-90ef-f626b21204e4" containerName="ovn-config"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.937868 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="3474419b-59cd-40ce-90ef-f626b21204e4" containerName="ovn-config"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.938076 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="72232599-3fc6-423f-a36f-d684a7b77fef" containerName="mariadb-account-create-update"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.938105 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="3474419b-59cd-40ce-90ef-f626b21204e4" containerName="ovn-config"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.939117 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.940654 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 03 12:24:13 crc kubenswrapper[4679]: I0203 12:24:13.955440 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-7b47n"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.055645 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxbm\" (UniqueName: \"kubernetes.io/projected/28066d94-f1ec-4f33-ac4f-052d767c8533-kube-api-access-lhxbm\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.055733 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-svc\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.055766 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.055957 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.056052 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-config\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.056128 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.157535 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhxbm\" (UniqueName: \"kubernetes.io/projected/28066d94-f1ec-4f33-ac4f-052d767c8533-kube-api-access-lhxbm\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.157597 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-svc\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.157622 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.157665 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.157693 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-config\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.157737 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.159010 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.159115 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-config\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.159184 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-svc\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.159272 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.161711 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.212080 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhxbm\" (UniqueName: \"kubernetes.io/projected/28066d94-f1ec-4f33-ac4f-052d767c8533-kube-api-access-lhxbm\") pod \"dnsmasq-dns-764c5664d7-7b47n\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.225275 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3474419b-59cd-40ce-90ef-f626b21204e4" path="/var/lib/kubelet/pods/3474419b-59cd-40ce-90ef-f626b21204e4/volumes"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.261946 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-7b47n"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.362553 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.540078 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fhzpb"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.544337 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.631305 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fhzpb"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.674981 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2420e5-7ea0-4879-9e16-1721fc087527-operator-scripts\") pod \"barbican-db-create-fhzpb\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.675094 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtqh\" (UniqueName: \"kubernetes.io/projected/2b2420e5-7ea0-4879-9e16-1721fc087527-kube-api-access-njtqh\") pod \"barbican-db-create-fhzpb\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.756155 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-q8nng"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.757331 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q8nng"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.778398 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2420e5-7ea0-4879-9e16-1721fc087527-operator-scripts\") pod \"barbican-db-create-fhzpb\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.778527 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtqh\" (UniqueName: \"kubernetes.io/projected/2b2420e5-7ea0-4879-9e16-1721fc087527-kube-api-access-njtqh\") pod \"barbican-db-create-fhzpb\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.790859 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2420e5-7ea0-4879-9e16-1721fc087527-operator-scripts\") pod \"barbican-db-create-fhzpb\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.814693 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0375-account-create-update-v7bhg"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.816158 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0375-account-create-update-v7bhg"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.825734 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.837106 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-q8nng"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.857111 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtqh\" (UniqueName: \"kubernetes.io/projected/2b2420e5-7ea0-4879-9e16-1721fc087527-kube-api-access-njtqh\") pod \"barbican-db-create-fhzpb\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " pod="openstack/barbican-db-create-fhzpb"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.860144 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0375-account-create-update-v7bhg"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.883054 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-operator-scripts\") pod \"cinder-db-create-q8nng\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " pod="openstack/cinder-db-create-q8nng"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.883138 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vnh\" (UniqueName: \"kubernetes.io/projected/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-kube-api-access-c2vnh\") pod \"cinder-db-create-q8nng\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " pod="openstack/cinder-db-create-q8nng"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.920418 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-j8w92"]
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.921790 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j8w92"
Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.940886 4679 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-create-fhzpb" Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.974098 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j8w92"] Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.991537 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b5e1e2-7d27-460b-920e-4d131c25b9ff-operator-scripts\") pod \"barbican-0375-account-create-update-v7bhg\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.991664 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrgg\" (UniqueName: \"kubernetes.io/projected/34b5e1e2-7d27-460b-920e-4d131c25b9ff-kube-api-access-hwrgg\") pod \"barbican-0375-account-create-update-v7bhg\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.991701 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-operator-scripts\") pod \"cinder-db-create-q8nng\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.991727 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2vnh\" (UniqueName: \"kubernetes.io/projected/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-kube-api-access-c2vnh\") pod \"cinder-db-create-q8nng\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:14 crc kubenswrapper[4679]: I0203 12:24:14.993473 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-operator-scripts\") pod \"cinder-db-create-q8nng\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.002821 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8ba4-account-create-update-rp2sp"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.004740 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.013442 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.013471 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ba4-account-create-update-rp2sp"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.023164 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2vnh\" (UniqueName: \"kubernetes.io/projected/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-kube-api-access-c2vnh\") pod \"cinder-db-create-q8nng\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:15 crc kubenswrapper[4679]: W0203 12:24:15.047462 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28066d94_f1ec_4f33_ac4f_052d767c8533.slice/crio-dd8dfdc42eeb68a979eb4db25a038c4fbe0b527c9cdb88815ba66edcb3a5339b WatchSource:0}: Error finding container dd8dfdc42eeb68a979eb4db25a038c4fbe0b527c9cdb88815ba66edcb3a5339b: Status 404 returned error can't find the container with id dd8dfdc42eeb68a979eb4db25a038c4fbe0b527c9cdb88815ba66edcb3a5339b Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.056834 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-7b47n"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.081257 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-q8h2z"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.082951 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.087770 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hk2x5" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.087953 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.090040 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-q8h2z"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.091807 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.095287 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a99d9a-c1b3-43a0-afb7-592a21d29b18-operator-scripts\") pod \"neutron-db-create-j8w92\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.096086 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8j6\" (UniqueName: \"kubernetes.io/projected/08a99d9a-c1b3-43a0-afb7-592a21d29b18-kube-api-access-cd8j6\") pod \"neutron-db-create-j8w92\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.096116 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.096184 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrgg\" (UniqueName: \"kubernetes.io/projected/34b5e1e2-7d27-460b-920e-4d131c25b9ff-kube-api-access-hwrgg\") pod \"barbican-0375-account-create-update-v7bhg\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.096369 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83970ea-96e1-479c-ac08-e26d41f50ed2-operator-scripts\") pod \"cinder-8ba4-account-create-update-rp2sp\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.096500 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zz69\" (UniqueName: \"kubernetes.io/projected/a83970ea-96e1-479c-ac08-e26d41f50ed2-kube-api-access-7zz69\") pod \"cinder-8ba4-account-create-update-rp2sp\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.096724 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b5e1e2-7d27-460b-920e-4d131c25b9ff-operator-scripts\") pod \"barbican-0375-account-create-update-v7bhg\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.101467 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b5e1e2-7d27-460b-920e-4d131c25b9ff-operator-scripts\") pod \"barbican-0375-account-create-update-v7bhg\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.142437 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrgg\" (UniqueName: \"kubernetes.io/projected/34b5e1e2-7d27-460b-920e-4d131c25b9ff-kube-api-access-hwrgg\") pod \"barbican-0375-account-create-update-v7bhg\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.157903 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.174809 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6efb-account-create-update-ldrrh"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.176721 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.180818 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.182824 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6efb-account-create-update-ldrrh"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198308 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a99d9a-c1b3-43a0-afb7-592a21d29b18-operator-scripts\") pod \"neutron-db-create-j8w92\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198412 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6t6k\" (UniqueName: \"kubernetes.io/projected/b4982065-33b7-4840-8c29-2e4507cfe43d-kube-api-access-s6t6k\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198464 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8j6\" (UniqueName: \"kubernetes.io/projected/08a99d9a-c1b3-43a0-afb7-592a21d29b18-kube-api-access-cd8j6\") pod \"neutron-db-create-j8w92\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198506 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-config-data\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198539 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-combined-ca-bundle\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198606 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83970ea-96e1-479c-ac08-e26d41f50ed2-operator-scripts\") pod \"cinder-8ba4-account-create-update-rp2sp\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.198654 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zz69\" (UniqueName: \"kubernetes.io/projected/a83970ea-96e1-479c-ac08-e26d41f50ed2-kube-api-access-7zz69\") pod \"cinder-8ba4-account-create-update-rp2sp\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.199265 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a99d9a-c1b3-43a0-afb7-592a21d29b18-operator-scripts\") pod \"neutron-db-create-j8w92\" (UID: 
\"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.203798 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83970ea-96e1-479c-ac08-e26d41f50ed2-operator-scripts\") pod \"cinder-8ba4-account-create-update-rp2sp\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.214088 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.238105 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zz69\" (UniqueName: \"kubernetes.io/projected/a83970ea-96e1-479c-ac08-e26d41f50ed2-kube-api-access-7zz69\") pod \"cinder-8ba4-account-create-update-rp2sp\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.239153 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8j6\" (UniqueName: \"kubernetes.io/projected/08a99d9a-c1b3-43a0-afb7-592a21d29b18-kube-api-access-cd8j6\") pod \"neutron-db-create-j8w92\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.251720 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.300638 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab63a515-b3d6-4b33-a9ff-1ed746139a03-operator-scripts\") pod \"neutron-6efb-account-create-update-ldrrh\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.300923 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6t6k\" (UniqueName: \"kubernetes.io/projected/b4982065-33b7-4840-8c29-2e4507cfe43d-kube-api-access-s6t6k\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.301091 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-config-data\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.301183 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-combined-ca-bundle\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.301321 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpj5\" (UniqueName: \"kubernetes.io/projected/ab63a515-b3d6-4b33-a9ff-1ed746139a03-kube-api-access-vzpj5\") pod 
\"neutron-6efb-account-create-update-ldrrh\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.306458 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-config-data\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.311155 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-combined-ca-bundle\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.339023 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6t6k\" (UniqueName: \"kubernetes.io/projected/b4982065-33b7-4840-8c29-2e4507cfe43d-kube-api-access-s6t6k\") pod \"keystone-db-sync-q8h2z\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.363164 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.403650 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpj5\" (UniqueName: \"kubernetes.io/projected/ab63a515-b3d6-4b33-a9ff-1ed746139a03-kube-api-access-vzpj5\") pod \"neutron-6efb-account-create-update-ldrrh\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.403739 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab63a515-b3d6-4b33-a9ff-1ed746139a03-operator-scripts\") pod \"neutron-6efb-account-create-update-ldrrh\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.404852 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab63a515-b3d6-4b33-a9ff-1ed746139a03-operator-scripts\") pod \"neutron-6efb-account-create-update-ldrrh\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.407205 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.438788 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpj5\" (UniqueName: \"kubernetes.io/projected/ab63a515-b3d6-4b33-a9ff-1ed746139a03-kube-api-access-vzpj5\") pod \"neutron-6efb-account-create-update-ldrrh\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.514712 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.657587 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" event={"ID":"28066d94-f1ec-4f33-ac4f-052d767c8533","Type":"ContainerStarted","Data":"0829f58a6a4b8d5e41ea72bd676d2b6f1963f50fbd5fe231692243e816714e8a"} Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.657690 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" event={"ID":"28066d94-f1ec-4f33-ac4f-052d767c8533","Type":"ContainerStarted","Data":"dd8dfdc42eeb68a979eb4db25a038c4fbe0b527c9cdb88815ba66edcb3a5339b"} Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.807932 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fhzpb"] Feb 03 12:24:15 crc kubenswrapper[4679]: I0203 12:24:15.920862 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-q8nng"] Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.092732 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0375-account-create-update-v7bhg"] Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.119724 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-j8w92"] Feb 03 12:24:16 crc kubenswrapper[4679]: W0203 12:24:16.124507 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b5e1e2_7d27_460b_920e_4d131c25b9ff.slice/crio-26a9d41eb3850aa102abeef9bf7e2444aa2ba73121d3815c54914e313e7a5342 WatchSource:0}: Error finding container 26a9d41eb3850aa102abeef9bf7e2444aa2ba73121d3815c54914e313e7a5342: Status 404 returned error can't find the container with id 26a9d41eb3850aa102abeef9bf7e2444aa2ba73121d3815c54914e313e7a5342 Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.298676 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ba4-account-create-update-rp2sp"] Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.339154 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-q8h2z"] Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.438665 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6efb-account-create-update-ldrrh"] Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.668261 4679 generic.go:334] "Generic (PLEG): container finished" podID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerID="0829f58a6a4b8d5e41ea72bd676d2b6f1963f50fbd5fe231692243e816714e8a" exitCode=0 Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.668819 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" event={"ID":"28066d94-f1ec-4f33-ac4f-052d767c8533","Type":"ContainerDied","Data":"0829f58a6a4b8d5e41ea72bd676d2b6f1963f50fbd5fe231692243e816714e8a"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.671456 4679 generic.go:334] "Generic (PLEG): container finished" podID="9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" containerID="2c82de99887a1b41f607a502e596bc05605ed702dbb7ae0cef5e0e22d6a9e70c" exitCode=0 Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.671570 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q8nng" event={"ID":"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2","Type":"ContainerDied","Data":"2c82de99887a1b41f607a502e596bc05605ed702dbb7ae0cef5e0e22d6a9e70c"} Feb 03 
12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.671619 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q8nng" event={"ID":"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2","Type":"ContainerStarted","Data":"6f5ea66ae0d1eb9e975f460758461c6c63c06e2e27dc40f4eeddd45aa2980de9"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.682692 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ba4-account-create-update-rp2sp" event={"ID":"a83970ea-96e1-479c-ac08-e26d41f50ed2","Type":"ContainerStarted","Data":"f5c65c2d45ce008de19161c14559fea430fb3c95e6e13c3cb88bb20c4d1abf1c"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.682750 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ba4-account-create-update-rp2sp" event={"ID":"a83970ea-96e1-479c-ac08-e26d41f50ed2","Type":"ContainerStarted","Data":"040e7ebe41fbed7b5942b8b580ec63b5cb889c8512067bf7c265e06db5e51ff3"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.689706 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q8h2z" event={"ID":"b4982065-33b7-4840-8c29-2e4507cfe43d","Type":"ContainerStarted","Data":"a7a0dc33dc7e0e4f8d14c9e26e2a64c6c39fe00542e57afca2ef256e6bf48260"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.699420 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0375-account-create-update-v7bhg" event={"ID":"34b5e1e2-7d27-460b-920e-4d131c25b9ff","Type":"ContainerStarted","Data":"d210313605fa27b3fc5b64d7354bd1c949fb1a236473054705851e247eb3af44"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.699495 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0375-account-create-update-v7bhg" event={"ID":"34b5e1e2-7d27-460b-920e-4d131c25b9ff","Type":"ContainerStarted","Data":"26a9d41eb3850aa102abeef9bf7e2444aa2ba73121d3815c54914e313e7a5342"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.711432 4679 generic.go:334] "Generic (PLEG): container finished" podID="2b2420e5-7ea0-4879-9e16-1721fc087527" containerID="78643914150be9c0f04b6947172cb80ed83a81dca8865ba3f7f1f29983d9bc50" exitCode=0 Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.711547 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fhzpb" event={"ID":"2b2420e5-7ea0-4879-9e16-1721fc087527","Type":"ContainerDied","Data":"78643914150be9c0f04b6947172cb80ed83a81dca8865ba3f7f1f29983d9bc50"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.711595 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fhzpb" event={"ID":"2b2420e5-7ea0-4879-9e16-1721fc087527","Type":"ContainerStarted","Data":"d08062b90f7c5af4117dfdb2f1558b24372d033950d539bce76c1b6692e82907"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.720568 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6efb-account-create-update-ldrrh" event={"ID":"ab63a515-b3d6-4b33-a9ff-1ed746139a03","Type":"ContainerStarted","Data":"331e9c53dbfd224f879ae02673c235dce433667a014aff2c8d2f158aea9f45fb"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.733317 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j8w92" event={"ID":"08a99d9a-c1b3-43a0-afb7-592a21d29b18","Type":"ContainerStarted","Data":"8e3d725f726e589955f04fc1bb8da4ef219ab6a39bf22d24d02fe582d9948de0"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.733412 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-j8w92" event={"ID":"08a99d9a-c1b3-43a0-afb7-592a21d29b18","Type":"ContainerStarted","Data":"213934be7dc2bb194971d076eb0e0c37aaed0ec20d9a17f2edaea93b3794d482"} Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.779642 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8ba4-account-create-update-rp2sp" podStartSLOduration=2.7796015020000002 podStartE2EDuration="2.779601502s" podCreationTimestamp="2026-02-03 12:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:16.747911155 +0000 UTC m=+1129.222807243" watchObservedRunningTime="2026-02-03 12:24:16.779601502 +0000 UTC m=+1129.254497590" Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.851082 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-j8w92" podStartSLOduration=2.8510567079999998 podStartE2EDuration="2.851056708s" podCreationTimestamp="2026-02-03 12:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:16.845868742 +0000 UTC m=+1129.320764820" watchObservedRunningTime="2026-02-03 12:24:16.851056708 +0000 UTC m=+1129.325952796" Feb 03 12:24:16 crc kubenswrapper[4679]: I0203 12:24:16.892060 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0375-account-create-update-v7bhg" podStartSLOduration=2.892027807 podStartE2EDuration="2.892027807s" podCreationTimestamp="2026-02-03 12:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:16.890268681 +0000 UTC m=+1129.365164769" watchObservedRunningTime="2026-02-03 12:24:16.892027807 +0000 UTC m=+1129.366923905" Feb 03 12:24:17 crc kubenswrapper[4679]: I0203 12:24:17.754769 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6efb-account-create-update-ldrrh" event={"ID":"ab63a515-b3d6-4b33-a9ff-1ed746139a03","Type":"ContainerStarted","Data":"49ecdbdbc68d12a4fe1abdfe5c744be5f2b6caf6ca44eedcb3cf3c5d262d3af6"} Feb 03 12:24:17 crc kubenswrapper[4679]: I0203 12:24:17.762770 4679 generic.go:334] "Generic (PLEG): container finished" podID="08a99d9a-c1b3-43a0-afb7-592a21d29b18" containerID="8e3d725f726e589955f04fc1bb8da4ef219ab6a39bf22d24d02fe582d9948de0" exitCode=0 Feb 03 12:24:17 crc kubenswrapper[4679]: I0203 12:24:17.762858 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j8w92" event={"ID":"08a99d9a-c1b3-43a0-afb7-592a21d29b18","Type":"ContainerDied","Data":"8e3d725f726e589955f04fc1bb8da4ef219ab6a39bf22d24d02fe582d9948de0"} Feb 03 12:24:17 crc kubenswrapper[4679]: I0203 12:24:17.766563 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" event={"ID":"28066d94-f1ec-4f33-ac4f-052d767c8533","Type":"ContainerStarted","Data":"c2475583026b0d5df976df90e7af917ab4bd9859134ac49ae0a929c8e8f4937e"} Feb 03 12:24:17 crc kubenswrapper[4679]: I0203 12:24:17.791613 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6efb-account-create-update-ldrrh" podStartSLOduration=2.791583284 podStartE2EDuration="2.791583284s" podCreationTimestamp="2026-02-03 12:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-03 12:24:17.783829822 +0000 UTC m=+1130.258725920" watchObservedRunningTime="2026-02-03 12:24:17.791583284 +0000 UTC m=+1130.266479372" Feb 03 12:24:17 crc kubenswrapper[4679]: I0203 12:24:17.814781 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" podStartSLOduration=4.8147652690000005 podStartE2EDuration="4.814765269s" podCreationTimestamp="2026-02-03 12:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:17.812327426 +0000 UTC m=+1130.287223524" watchObservedRunningTime="2026-02-03 12:24:17.814765269 +0000 UTC m=+1130.289661367" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.287611 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.404979 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-operator-scripts\") pod \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.405206 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2vnh\" (UniqueName: \"kubernetes.io/projected/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-kube-api-access-c2vnh\") pod \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\" (UID: \"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2\") " Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.406290 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" (UID: "9a7b8990-8fe4-4e3c-bdf2-41d9039afed2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.415518 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-kube-api-access-c2vnh" (OuterVolumeSpecName: "kube-api-access-c2vnh") pod "9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" (UID: "9a7b8990-8fe4-4e3c-bdf2-41d9039afed2"). InnerVolumeSpecName "kube-api-access-c2vnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.422431 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2vnh\" (UniqueName: \"kubernetes.io/projected/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-kube-api-access-c2vnh\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.422478 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.485098 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fhzpb" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.625836 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njtqh\" (UniqueName: \"kubernetes.io/projected/2b2420e5-7ea0-4879-9e16-1721fc087527-kube-api-access-njtqh\") pod \"2b2420e5-7ea0-4879-9e16-1721fc087527\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.625954 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2420e5-7ea0-4879-9e16-1721fc087527-operator-scripts\") pod \"2b2420e5-7ea0-4879-9e16-1721fc087527\" (UID: \"2b2420e5-7ea0-4879-9e16-1721fc087527\") " Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.626584 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2420e5-7ea0-4879-9e16-1721fc087527-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b2420e5-7ea0-4879-9e16-1721fc087527" (UID: "2b2420e5-7ea0-4879-9e16-1721fc087527"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.630082 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2420e5-7ea0-4879-9e16-1721fc087527-kube-api-access-njtqh" (OuterVolumeSpecName: "kube-api-access-njtqh") pod "2b2420e5-7ea0-4879-9e16-1721fc087527" (UID: "2b2420e5-7ea0-4879-9e16-1721fc087527"). InnerVolumeSpecName "kube-api-access-njtqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.729076 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njtqh\" (UniqueName: \"kubernetes.io/projected/2b2420e5-7ea0-4879-9e16-1721fc087527-kube-api-access-njtqh\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.729142 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b2420e5-7ea0-4879-9e16-1721fc087527-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.777996 4679 generic.go:334] "Generic (PLEG): container finished" podID="a83970ea-96e1-479c-ac08-e26d41f50ed2" containerID="f5c65c2d45ce008de19161c14559fea430fb3c95e6e13c3cb88bb20c4d1abf1c" exitCode=0 Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.778078 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ba4-account-create-update-rp2sp" event={"ID":"a83970ea-96e1-479c-ac08-e26d41f50ed2","Type":"ContainerDied","Data":"f5c65c2d45ce008de19161c14559fea430fb3c95e6e13c3cb88bb20c4d1abf1c"} Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.781190 4679 generic.go:334] "Generic (PLEG): container finished" podID="34b5e1e2-7d27-460b-920e-4d131c25b9ff" containerID="d210313605fa27b3fc5b64d7354bd1c949fb1a236473054705851e247eb3af44" exitCode=0 Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.781263 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0375-account-create-update-v7bhg" event={"ID":"34b5e1e2-7d27-460b-920e-4d131c25b9ff","Type":"ContainerDied","Data":"d210313605fa27b3fc5b64d7354bd1c949fb1a236473054705851e247eb3af44"} Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.796540 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-fhzpb" event={"ID":"2b2420e5-7ea0-4879-9e16-1721fc087527","Type":"ContainerDied","Data":"d08062b90f7c5af4117dfdb2f1558b24372d033950d539bce76c1b6692e82907"} Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.796622 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08062b90f7c5af4117dfdb2f1558b24372d033950d539bce76c1b6692e82907" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.796658 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fhzpb" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.799881 4679 generic.go:334] "Generic (PLEG): container finished" podID="ab63a515-b3d6-4b33-a9ff-1ed746139a03" containerID="49ecdbdbc68d12a4fe1abdfe5c744be5f2b6caf6ca44eedcb3cf3c5d262d3af6" exitCode=0 Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.799947 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6efb-account-create-update-ldrrh" event={"ID":"ab63a515-b3d6-4b33-a9ff-1ed746139a03","Type":"ContainerDied","Data":"49ecdbdbc68d12a4fe1abdfe5c744be5f2b6caf6ca44eedcb3cf3c5d262d3af6"} Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.809353 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q8nng" event={"ID":"9a7b8990-8fe4-4e3c-bdf2-41d9039afed2","Type":"ContainerDied","Data":"6f5ea66ae0d1eb9e975f460758461c6c63c06e2e27dc40f4eeddd45aa2980de9"} Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.809604 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5ea66ae0d1eb9e975f460758461c6c63c06e2e27dc40f4eeddd45aa2980de9" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.809637 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" Feb 03 12:24:18 crc kubenswrapper[4679]: I0203 12:24:18.809789 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q8nng" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.063259 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.238929 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd8j6\" (UniqueName: \"kubernetes.io/projected/08a99d9a-c1b3-43a0-afb7-592a21d29b18-kube-api-access-cd8j6\") pod \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.239070 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a99d9a-c1b3-43a0-afb7-592a21d29b18-operator-scripts\") pod \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\" (UID: \"08a99d9a-c1b3-43a0-afb7-592a21d29b18\") " Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.240190 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08a99d9a-c1b3-43a0-afb7-592a21d29b18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08a99d9a-c1b3-43a0-afb7-592a21d29b18" (UID: "08a99d9a-c1b3-43a0-afb7-592a21d29b18"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.247841 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a99d9a-c1b3-43a0-afb7-592a21d29b18-kube-api-access-cd8j6" (OuterVolumeSpecName: "kube-api-access-cd8j6") pod "08a99d9a-c1b3-43a0-afb7-592a21d29b18" (UID: "08a99d9a-c1b3-43a0-afb7-592a21d29b18"). InnerVolumeSpecName "kube-api-access-cd8j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.341534 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08a99d9a-c1b3-43a0-afb7-592a21d29b18-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.341586 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd8j6\" (UniqueName: \"kubernetes.io/projected/08a99d9a-c1b3-43a0-afb7-592a21d29b18-kube-api-access-cd8j6\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.822453 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-j8w92" Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.826525 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-j8w92" event={"ID":"08a99d9a-c1b3-43a0-afb7-592a21d29b18","Type":"ContainerDied","Data":"213934be7dc2bb194971d076eb0e0c37aaed0ec20d9a17f2edaea93b3794d482"} Feb 03 12:24:19 crc kubenswrapper[4679]: I0203 12:24:19.826625 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="213934be7dc2bb194971d076eb0e0c37aaed0ec20d9a17f2edaea93b3794d482" Feb 03 12:24:21 crc kubenswrapper[4679]: I0203 12:24:21.843830 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d9b136b-ec91-4486-af62-ec1f49e4e010" containerID="702dfef749bd9bf0497f8de1ebe892d3ea127ea521fedcc0bc24f24ba77bf6e9" exitCode=0 Feb 03 12:24:21 crc kubenswrapper[4679]: I0203 12:24:21.844291 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-flh2f" event={"ID":"2d9b136b-ec91-4486-af62-ec1f49e4e010","Type":"ContainerDied","Data":"702dfef749bd9bf0497f8de1ebe892d3ea127ea521fedcc0bc24f24ba77bf6e9"} Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.506550 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.512839 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.527899 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzpj5\" (UniqueName: \"kubernetes.io/projected/ab63a515-b3d6-4b33-a9ff-1ed746139a03-kube-api-access-vzpj5\") pod \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.528412 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83970ea-96e1-479c-ac08-e26d41f50ed2-operator-scripts\") pod \"a83970ea-96e1-479c-ac08-e26d41f50ed2\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.528526 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zz69\" (UniqueName: \"kubernetes.io/projected/a83970ea-96e1-479c-ac08-e26d41f50ed2-kube-api-access-7zz69\") pod \"a83970ea-96e1-479c-ac08-e26d41f50ed2\" (UID: \"a83970ea-96e1-479c-ac08-e26d41f50ed2\") " Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.528755 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab63a515-b3d6-4b33-a9ff-1ed746139a03-operator-scripts\") pod \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\" (UID: \"ab63a515-b3d6-4b33-a9ff-1ed746139a03\") " Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.531212 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83970ea-96e1-479c-ac08-e26d41f50ed2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a83970ea-96e1-479c-ac08-e26d41f50ed2" (UID: "a83970ea-96e1-479c-ac08-e26d41f50ed2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.536195 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab63a515-b3d6-4b33-a9ff-1ed746139a03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab63a515-b3d6-4b33-a9ff-1ed746139a03" (UID: "ab63a515-b3d6-4b33-a9ff-1ed746139a03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.536546 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab63a515-b3d6-4b33-a9ff-1ed746139a03-kube-api-access-vzpj5" (OuterVolumeSpecName: "kube-api-access-vzpj5") pod "ab63a515-b3d6-4b33-a9ff-1ed746139a03" (UID: "ab63a515-b3d6-4b33-a9ff-1ed746139a03"). InnerVolumeSpecName "kube-api-access-vzpj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.537048 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83970ea-96e1-479c-ac08-e26d41f50ed2-kube-api-access-7zz69" (OuterVolumeSpecName: "kube-api-access-7zz69") pod "a83970ea-96e1-479c-ac08-e26d41f50ed2" (UID: "a83970ea-96e1-479c-ac08-e26d41f50ed2"). InnerVolumeSpecName "kube-api-access-7zz69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.594871 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.630446 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwrgg\" (UniqueName: \"kubernetes.io/projected/34b5e1e2-7d27-460b-920e-4d131c25b9ff-kube-api-access-hwrgg\") pod \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.630668 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b5e1e2-7d27-460b-920e-4d131c25b9ff-operator-scripts\") pod \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\" (UID: \"34b5e1e2-7d27-460b-920e-4d131c25b9ff\") " Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.631057 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83970ea-96e1-479c-ac08-e26d41f50ed2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.631109 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zz69\" (UniqueName: \"kubernetes.io/projected/a83970ea-96e1-479c-ac08-e26d41f50ed2-kube-api-access-7zz69\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.631123 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab63a515-b3d6-4b33-a9ff-1ed746139a03-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.631136 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzpj5\" (UniqueName: \"kubernetes.io/projected/ab63a515-b3d6-4b33-a9ff-1ed746139a03-kube-api-access-vzpj5\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.631769 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b5e1e2-7d27-460b-920e-4d131c25b9ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34b5e1e2-7d27-460b-920e-4d131c25b9ff" (UID: "34b5e1e2-7d27-460b-920e-4d131c25b9ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.638297 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b5e1e2-7d27-460b-920e-4d131c25b9ff-kube-api-access-hwrgg" (OuterVolumeSpecName: "kube-api-access-hwrgg") pod "34b5e1e2-7d27-460b-920e-4d131c25b9ff" (UID: "34b5e1e2-7d27-460b-920e-4d131c25b9ff"). InnerVolumeSpecName "kube-api-access-hwrgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.739515 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b5e1e2-7d27-460b-920e-4d131c25b9ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.739560 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwrgg\" (UniqueName: \"kubernetes.io/projected/34b5e1e2-7d27-460b-920e-4d131c25b9ff-kube-api-access-hwrgg\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.855265 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ba4-account-create-update-rp2sp" event={"ID":"a83970ea-96e1-479c-ac08-e26d41f50ed2","Type":"ContainerDied","Data":"040e7ebe41fbed7b5942b8b580ec63b5cb889c8512067bf7c265e06db5e51ff3"} Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.855306 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ba4-account-create-update-rp2sp" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.855339 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="040e7ebe41fbed7b5942b8b580ec63b5cb889c8512067bf7c265e06db5e51ff3" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.858089 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q8h2z" event={"ID":"b4982065-33b7-4840-8c29-2e4507cfe43d","Type":"ContainerStarted","Data":"2425f3bfa5f0fd45afdd47d687d4748f219aca10d2741a9272211f069aab10f8"} Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.859902 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0375-account-create-update-v7bhg" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.859927 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0375-account-create-update-v7bhg" event={"ID":"34b5e1e2-7d27-460b-920e-4d131c25b9ff","Type":"ContainerDied","Data":"26a9d41eb3850aa102abeef9bf7e2444aa2ba73121d3815c54914e313e7a5342"} Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.859964 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26a9d41eb3850aa102abeef9bf7e2444aa2ba73121d3815c54914e313e7a5342" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.861918 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6efb-account-create-update-ldrrh" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.862003 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6efb-account-create-update-ldrrh" event={"ID":"ab63a515-b3d6-4b33-a9ff-1ed746139a03","Type":"ContainerDied","Data":"331e9c53dbfd224f879ae02673c235dce433667a014aff2c8d2f158aea9f45fb"} Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.862054 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="331e9c53dbfd224f879ae02673c235dce433667a014aff2c8d2f158aea9f45fb" Feb 03 12:24:22 crc kubenswrapper[4679]: I0203 12:24:22.890274 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-q8h2z" podStartSLOduration=1.763121692 podStartE2EDuration="7.890192593s" podCreationTimestamp="2026-02-03 12:24:15 +0000 UTC" firstStartedPulling="2026-02-03 12:24:16.372712199 +0000 UTC m=+1128.847608297" lastFinishedPulling="2026-02-03 12:24:22.49978311 +0000 UTC m=+1134.974679198" observedRunningTime="2026-02-03 12:24:22.876396723 +0000 UTC m=+1135.351292821" watchObservedRunningTime="2026-02-03 12:24:22.890192593 +0000 UTC m=+1135.365088681" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.239377 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-flh2f" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.351383 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-combined-ca-bundle\") pod \"2d9b136b-ec91-4486-af62-ec1f49e4e010\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.351491 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-config-data\") pod \"2d9b136b-ec91-4486-af62-ec1f49e4e010\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.351561 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xqnj\" (UniqueName: \"kubernetes.io/projected/2d9b136b-ec91-4486-af62-ec1f49e4e010-kube-api-access-2xqnj\") pod \"2d9b136b-ec91-4486-af62-ec1f49e4e010\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.351624 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-db-sync-config-data\") pod \"2d9b136b-ec91-4486-af62-ec1f49e4e010\" (UID: \"2d9b136b-ec91-4486-af62-ec1f49e4e010\") " Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.359388 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2d9b136b-ec91-4486-af62-ec1f49e4e010" (UID: "2d9b136b-ec91-4486-af62-ec1f49e4e010"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.359804 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9b136b-ec91-4486-af62-ec1f49e4e010-kube-api-access-2xqnj" (OuterVolumeSpecName: "kube-api-access-2xqnj") pod "2d9b136b-ec91-4486-af62-ec1f49e4e010" (UID: "2d9b136b-ec91-4486-af62-ec1f49e4e010"). InnerVolumeSpecName "kube-api-access-2xqnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.381113 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d9b136b-ec91-4486-af62-ec1f49e4e010" (UID: "2d9b136b-ec91-4486-af62-ec1f49e4e010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.399621 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-config-data" (OuterVolumeSpecName: "config-data") pod "2d9b136b-ec91-4486-af62-ec1f49e4e010" (UID: "2d9b136b-ec91-4486-af62-ec1f49e4e010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.454655 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.454727 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.454753 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xqnj\" (UniqueName: \"kubernetes.io/projected/2d9b136b-ec91-4486-af62-ec1f49e4e010-kube-api-access-2xqnj\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.454782 4679 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9b136b-ec91-4486-af62-ec1f49e4e010-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.876994 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-flh2f" event={"ID":"2d9b136b-ec91-4486-af62-ec1f49e4e010","Type":"ContainerDied","Data":"b70a7c9467d5ea80c6e08b1ce3840534f44123995d904ba2d8fc64afc5b567f6"} Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.879100 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70a7c9467d5ea80c6e08b1ce3840534f44123995d904ba2d8fc64afc5b567f6" Feb 03 12:24:23 crc kubenswrapper[4679]: I0203 12:24:23.877068 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-flh2f" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.265872 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.422991 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-djvr2"] Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.423338 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-djvr2" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="dnsmasq-dns" containerID="cri-o://b03a134b9174ae8a341b7e9bf99664aed0a2f91f4473b4eab1406020385aa505" gracePeriod=10 Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.548657 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8xnp"] Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549217 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a99d9a-c1b3-43a0-afb7-592a21d29b18" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549247 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a99d9a-c1b3-43a0-afb7-592a21d29b18" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549265 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83970ea-96e1-479c-ac08-e26d41f50ed2" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549274 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83970ea-96e1-479c-ac08-e26d41f50ed2" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549282 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9b136b-ec91-4486-af62-ec1f49e4e010" containerName="glance-db-sync" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549316 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9b136b-ec91-4486-af62-ec1f49e4e010" containerName="glance-db-sync" Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549348 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2420e5-7ea0-4879-9e16-1721fc087527" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549372 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2420e5-7ea0-4879-9e16-1721fc087527" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549392 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549398 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549420 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b5e1e2-7d27-460b-920e-4d131c25b9ff" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549426 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b5e1e2-7d27-460b-920e-4d131c25b9ff" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: E0203 12:24:24.549435 4679 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ab63a515-b3d6-4b33-a9ff-1ed746139a03" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549442 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab63a515-b3d6-4b33-a9ff-1ed746139a03" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549653 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a99d9a-c1b3-43a0-afb7-592a21d29b18" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549663 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83970ea-96e1-479c-ac08-e26d41f50ed2" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549672 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab63a515-b3d6-4b33-a9ff-1ed746139a03" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549687 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b5e1e2-7d27-460b-920e-4d131c25b9ff" containerName="mariadb-account-create-update" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549697 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549709 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9b136b-ec91-4486-af62-ec1f49e4e010" containerName="glance-db-sync" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.549719 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2420e5-7ea0-4879-9e16-1721fc087527" containerName="mariadb-database-create" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.550934 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.571039 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-djvr2" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.589922 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.590043 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.590068 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-config\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.590174 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.590206 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4trg\" (UniqueName: \"kubernetes.io/projected/66282a53-26c3-41e5-ac68-d9cba1c12335-kube-api-access-p4trg\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.590249 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.593501 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8xnp"] Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.695025 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.695097 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-config\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.695906 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.695978 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4trg\" (UniqueName: \"kubernetes.io/projected/66282a53-26c3-41e5-ac68-d9cba1c12335-kube-api-access-p4trg\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.696017 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.696069 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.696194 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-config\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.697145 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.697784 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.697987 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.698385 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-swift-storage-0\") pod 
\"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.768254 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4trg\" (UniqueName: \"kubernetes.io/projected/66282a53-26c3-41e5-ac68-d9cba1c12335-kube-api-access-p4trg\") pod \"dnsmasq-dns-74f6bcbc87-h8xnp\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.898031 4679 generic.go:334] "Generic (PLEG): container finished" podID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerID="b03a134b9174ae8a341b7e9bf99664aed0a2f91f4473b4eab1406020385aa505" exitCode=0 Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.898098 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-djvr2" event={"ID":"f98e14ea-27e5-471b-900d-39c0cc2d676f","Type":"ContainerDied","Data":"b03a134b9174ae8a341b7e9bf99664aed0a2f91f4473b4eab1406020385aa505"} Feb 03 12:24:24 crc kubenswrapper[4679]: I0203 12:24:24.936386 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:25 crc kubenswrapper[4679]: W0203 12:24:25.493001 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66282a53_26c3_41e5_ac68_d9cba1c12335.slice/crio-4f6204871970502499b2591591333f9d780dd45dac5d6024ac463cde68978f75 WatchSource:0}: Error finding container 4f6204871970502499b2591591333f9d780dd45dac5d6024ac463cde68978f75: Status 404 returned error can't find the container with id 4f6204871970502499b2591591333f9d780dd45dac5d6024ac463cde68978f75 Feb 03 12:24:25 crc kubenswrapper[4679]: I0203 12:24:25.494184 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8xnp"] Feb 03 12:24:25 crc kubenswrapper[4679]: I0203 12:24:25.910221 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" event={"ID":"66282a53-26c3-41e5-ac68-d9cba1c12335","Type":"ContainerStarted","Data":"4f6204871970502499b2591591333f9d780dd45dac5d6024ac463cde68978f75"} Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.597241 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.696004 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnmz4\" (UniqueName: \"kubernetes.io/projected/f98e14ea-27e5-471b-900d-39c0cc2d676f-kube-api-access-tnmz4\") pod \"f98e14ea-27e5-471b-900d-39c0cc2d676f\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.696099 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-sb\") pod \"f98e14ea-27e5-471b-900d-39c0cc2d676f\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.696189 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-dns-svc\") pod \"f98e14ea-27e5-471b-900d-39c0cc2d676f\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.696231 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-config\") pod \"f98e14ea-27e5-471b-900d-39c0cc2d676f\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.696254 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-nb\") pod \"f98e14ea-27e5-471b-900d-39c0cc2d676f\" (UID: \"f98e14ea-27e5-471b-900d-39c0cc2d676f\") " Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.709086 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98e14ea-27e5-471b-900d-39c0cc2d676f-kube-api-access-tnmz4" (OuterVolumeSpecName: "kube-api-access-tnmz4") pod "f98e14ea-27e5-471b-900d-39c0cc2d676f" (UID: "f98e14ea-27e5-471b-900d-39c0cc2d676f"). InnerVolumeSpecName "kube-api-access-tnmz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.750269 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f98e14ea-27e5-471b-900d-39c0cc2d676f" (UID: "f98e14ea-27e5-471b-900d-39c0cc2d676f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.754001 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f98e14ea-27e5-471b-900d-39c0cc2d676f" (UID: "f98e14ea-27e5-471b-900d-39c0cc2d676f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.761907 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f98e14ea-27e5-471b-900d-39c0cc2d676f" (UID: "f98e14ea-27e5-471b-900d-39c0cc2d676f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.778759 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-config" (OuterVolumeSpecName: "config") pod "f98e14ea-27e5-471b-900d-39c0cc2d676f" (UID: "f98e14ea-27e5-471b-900d-39c0cc2d676f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.797768 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.797801 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.797814 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.797827 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnmz4\" (UniqueName: \"kubernetes.io/projected/f98e14ea-27e5-471b-900d-39c0cc2d676f-kube-api-access-tnmz4\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.797837 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f98e14ea-27e5-471b-900d-39c0cc2d676f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.956625 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-djvr2" event={"ID":"f98e14ea-27e5-471b-900d-39c0cc2d676f","Type":"ContainerDied","Data":"b5d33bdfbab4ccba042a2601b5472c7ee82ac6b31f2b17d4694ee143456a52d3"} Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.956704 4679 scope.go:117] "RemoveContainer" containerID="b03a134b9174ae8a341b7e9bf99664aed0a2f91f4473b4eab1406020385aa505" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.957142 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-djvr2" Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.959515 4679 generic.go:334] "Generic (PLEG): container finished" podID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerID="b00825c469ea19d985c620c1ffc698a41ddf34571b314809c0042d92f598a536" exitCode=0 Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.959563 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" event={"ID":"66282a53-26c3-41e5-ac68-d9cba1c12335","Type":"ContainerDied","Data":"b00825c469ea19d985c620c1ffc698a41ddf34571b314809c0042d92f598a536"} Feb 03 12:24:29 crc kubenswrapper[4679]: I0203 12:24:29.989241 4679 scope.go:117] "RemoveContainer" containerID="247583b6e500f4fb9c97f116eb3aa570779c3a44ddff5806b769142113f29c33" Feb 03 12:24:30 crc kubenswrapper[4679]: I0203 12:24:30.012022 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-djvr2"] Feb 03 12:24:30 crc kubenswrapper[4679]: I0203 12:24:30.019414 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-djvr2"] Feb 03 12:24:30 crc kubenswrapper[4679]: I0203 12:24:30.226920 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" path="/var/lib/kubelet/pods/f98e14ea-27e5-471b-900d-39c0cc2d676f/volumes" Feb 03 12:24:31 crc kubenswrapper[4679]: I0203 12:24:31.981530 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" event={"ID":"66282a53-26c3-41e5-ac68-d9cba1c12335","Type":"ContainerStarted","Data":"2f993dc379112bafdd1b5eb91342a63754398253290605bb976e0f039011a6cb"} Feb 03 12:24:31 crc kubenswrapper[4679]: I0203 12:24:31.982501 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:32 crc kubenswrapper[4679]: I0203 12:24:32.010214 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" podStartSLOduration=8.010175035 podStartE2EDuration="8.010175035s" podCreationTimestamp="2026-02-03 12:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:32.001254392 +0000 UTC m=+1144.476150480" watchObservedRunningTime="2026-02-03 12:24:32.010175035 +0000 UTC m=+1144.485071123" Feb 03 12:24:34 crc kubenswrapper[4679]: I0203 12:24:34.570525 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-djvr2" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Feb 03 12:24:37 crc kubenswrapper[4679]: I0203 12:24:37.030415 4679 generic.go:334] "Generic (PLEG): container finished" podID="b4982065-33b7-4840-8c29-2e4507cfe43d" containerID="2425f3bfa5f0fd45afdd47d687d4748f219aca10d2741a9272211f069aab10f8" exitCode=0 Feb 03 12:24:37 crc kubenswrapper[4679]: I0203 12:24:37.030533 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q8h2z" event={"ID":"b4982065-33b7-4840-8c29-2e4507cfe43d","Type":"ContainerDied","Data":"2425f3bfa5f0fd45afdd47d687d4748f219aca10d2741a9272211f069aab10f8"} Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.385264 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.487619 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6t6k\" (UniqueName: \"kubernetes.io/projected/b4982065-33b7-4840-8c29-2e4507cfe43d-kube-api-access-s6t6k\") pod \"b4982065-33b7-4840-8c29-2e4507cfe43d\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.487841 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-combined-ca-bundle\") pod \"b4982065-33b7-4840-8c29-2e4507cfe43d\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.487959 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-config-data\") pod \"b4982065-33b7-4840-8c29-2e4507cfe43d\" (UID: \"b4982065-33b7-4840-8c29-2e4507cfe43d\") " Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.493585 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4982065-33b7-4840-8c29-2e4507cfe43d-kube-api-access-s6t6k" (OuterVolumeSpecName: "kube-api-access-s6t6k") pod "b4982065-33b7-4840-8c29-2e4507cfe43d" (UID: "b4982065-33b7-4840-8c29-2e4507cfe43d"). InnerVolumeSpecName "kube-api-access-s6t6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.511856 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4982065-33b7-4840-8c29-2e4507cfe43d" (UID: "b4982065-33b7-4840-8c29-2e4507cfe43d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.528880 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-config-data" (OuterVolumeSpecName: "config-data") pod "b4982065-33b7-4840-8c29-2e4507cfe43d" (UID: "b4982065-33b7-4840-8c29-2e4507cfe43d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.589971 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.590004 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4982065-33b7-4840-8c29-2e4507cfe43d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:38 crc kubenswrapper[4679]: I0203 12:24:38.590014 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6t6k\" (UniqueName: \"kubernetes.io/projected/b4982065-33b7-4840-8c29-2e4507cfe43d-kube-api-access-s6t6k\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.077067 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q8h2z" event={"ID":"b4982065-33b7-4840-8c29-2e4507cfe43d","Type":"ContainerDied","Data":"a7a0dc33dc7e0e4f8d14c9e26e2a64c6c39fe00542e57afca2ef256e6bf48260"} Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.077439 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a0dc33dc7e0e4f8d14c9e26e2a64c6c39fe00542e57afca2ef256e6bf48260" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.077170 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q8h2z" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.321556 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8xnp"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.321827 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="dnsmasq-dns" containerID="cri-o://2f993dc379112bafdd1b5eb91342a63754398253290605bb976e0f039011a6cb" gracePeriod=10 Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.322491 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.365900 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-z9qsx"] Feb 03 12:24:39 crc kubenswrapper[4679]: E0203 12:24:39.366303 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="dnsmasq-dns" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.366321 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="dnsmasq-dns" Feb 03 12:24:39 crc kubenswrapper[4679]: E0203 12:24:39.366345 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4982065-33b7-4840-8c29-2e4507cfe43d" containerName="keystone-db-sync" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.366353 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4982065-33b7-4840-8c29-2e4507cfe43d" containerName="keystone-db-sync" Feb 03 12:24:39 crc kubenswrapper[4679]: E0203 12:24:39.366398 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="init" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.366407 4679 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="init" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.366598 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4982065-33b7-4840-8c29-2e4507cfe43d" containerName="keystone-db-sync" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.366619 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98e14ea-27e5-471b-900d-39c0cc2d676f" containerName="dnsmasq-dns" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.367675 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.396050 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9z7tv"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.399659 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.405302 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.405485 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hk2x5" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.405606 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.405675 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.408961 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.420419 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-z9qsx"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.447043 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9z7tv"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508390 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-scripts\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508460 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-credential-keys\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508527 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508655 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-fernet-keys\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508676 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508702 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvvl\" (UniqueName: \"kubernetes.io/projected/caef2f29-b49a-4f88-b88f-b8e5581e1033-kube-api-access-rcvvl\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508772 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-config\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508792 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-combined-ca-bundle\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508814 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-config-data\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508858 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-svc\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508881 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.508935 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5pht\" (UniqueName: \"kubernetes.io/projected/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-kube-api-access-w5pht\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.552827 4679 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/horizon-84b5f78fb9-n9b9g"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.566546 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.571951 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.573182 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.583839 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xj962" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.585912 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613350 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5pht\" (UniqueName: \"kubernetes.io/projected/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-kube-api-access-w5pht\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613436 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-scripts\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613473 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-horizon-secret-key\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613504 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-credential-keys\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613553 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613581 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-logs\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613626 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-fernet-keys\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " 
pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613650 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613677 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcvvl\" (UniqueName: \"kubernetes.io/projected/caef2f29-b49a-4f88-b88f-b8e5581e1033-kube-api-access-rcvvl\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613728 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-config-data\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613755 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65z6c\" (UniqueName: \"kubernetes.io/projected/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-kube-api-access-65z6c\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613779 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-config\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613805 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-combined-ca-bundle\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613830 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-config-data\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613876 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-svc\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613906 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 
12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.613935 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-scripts\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.615430 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.617217 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.622864 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-config\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.626509 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84b5f78fb9-n9b9g"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.633228 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.633838 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-svc\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.638065 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-fernet-keys\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.638269 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-config-data\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.650882 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-scripts\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 
12:24:39.650900 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-combined-ca-bundle\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.660350 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-credential-keys\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.670153 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcvvl\" (UniqueName: \"kubernetes.io/projected/caef2f29-b49a-4f88-b88f-b8e5581e1033-kube-api-access-rcvvl\") pod \"dnsmasq-dns-847c4cc679-z9qsx\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.689371 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.699495 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.711663 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.715118 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5pht\" (UniqueName: \"kubernetes.io/projected/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-kube-api-access-w5pht\") pod \"keystone-bootstrap-9z7tv\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.718871 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.719073 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.723991 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-logs\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.724114 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-config-data\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.724150 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65z6c\" (UniqueName: \"kubernetes.io/projected/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-kube-api-access-65z6c\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.724234 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-scripts\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.725996 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-horizon-secret-key\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.727464 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-logs\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.728520 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-config-data\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.736015 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7n6pw"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.738780 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-scripts\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.739659 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.744875 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nbb4v" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.745954 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.746239 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.747070 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-horizon-secret-key\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.773839 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.784931 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65z6c\" (UniqueName: \"kubernetes.io/projected/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-kube-api-access-65z6c\") pod \"horizon-84b5f78fb9-n9b9g\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.816194 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.819028 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7n6pw"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836323 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-scripts\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836379 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836400 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836417 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-log-httpd\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836464 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vld\" (UniqueName: 
\"kubernetes.io/projected/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-kube-api-access-j2vld\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836492 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-run-httpd\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.836520 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-config-data\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.885724 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-m9g7v"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.886792 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.896653 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.896989 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.899574 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fl2ch" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.916099 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m9g7v"] Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.924884 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940083 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-scripts\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940156 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9ca558-ad13-4599-80e5-05be55c84a55-etc-machine-id\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940204 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-run-httpd\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940242 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-config-data\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940302 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-combined-ca-bundle\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940384 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxs6s\" (UniqueName: \"kubernetes.io/projected/bc9ca558-ad13-4599-80e5-05be55c84a55-kube-api-access-jxs6s\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940430 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-scripts\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940452 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940474 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-db-sync-config-data\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940498 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-log-httpd\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940519 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940553 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rw28\" (UniqueName: \"kubernetes.io/projected/cc594779-2b21-4b8c-8fc6-a2f51273089d-kube-api-access-9rw28\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940577 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-config-data\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940602 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-config\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940644 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-combined-ca-bundle\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.940672 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vld\" (UniqueName: \"kubernetes.io/projected/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-kube-api-access-j2vld\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.941528 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-run-httpd\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.944926 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-log-httpd\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.945618 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" 
Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.948821 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-config-data\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.971005 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.981254 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-scripts\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.982033 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.999302 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vld\" (UniqueName: \"kubernetes.io/projected/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-kube-api-access-j2vld\") pod \"ceilometer-0\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " pod="openstack/ceilometer-0" Feb 03 12:24:39 crc kubenswrapper[4679]: I0203 12:24:39.999420 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vdgvw"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.000920 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.014331 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.014566 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9bhxw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.043863 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxs6s\" (UniqueName: \"kubernetes.io/projected/bc9ca558-ad13-4599-80e5-05be55c84a55-kube-api-access-jxs6s\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044387 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-db-sync-config-data\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044425 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7km\" (UniqueName: \"kubernetes.io/projected/c0e98bf9-342d-44dc-9742-1a732178eebd-kube-api-access-xr7km\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044452 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rw28\" (UniqueName: \"kubernetes.io/projected/cc594779-2b21-4b8c-8fc6-a2f51273089d-kube-api-access-9rw28\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044471 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-config-data\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044490 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-config\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044529 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-combined-ca-bundle\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044572 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-db-sync-config-data\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044602 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-scripts\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044620 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9ca558-ad13-4599-80e5-05be55c84a55-etc-machine-id\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044706 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-combined-ca-bundle\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.044739 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-combined-ca-bundle\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.046082 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vdgvw"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.049069 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9ca558-ad13-4599-80e5-05be55c84a55-etc-machine-id\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.057556 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-config\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.059107 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-combined-ca-bundle\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.062219 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-combined-ca-bundle\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.073500 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-scripts\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.075021 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-db-sync-config-data\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.077177 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-config-data\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.080808 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59779d96f5-vtczj"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.082315 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.087018 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxs6s\" (UniqueName: \"kubernetes.io/projected/bc9ca558-ad13-4599-80e5-05be55c84a55-kube-api-access-jxs6s\") pod \"cinder-db-sync-7n6pw\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.087991 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rw28\" (UniqueName: \"kubernetes.io/projected/cc594779-2b21-4b8c-8fc6-a2f51273089d-kube-api-access-9rw28\") pod \"neutron-db-sync-m9g7v\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.097120 4679 generic.go:334] "Generic (PLEG): container finished" podID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerID="2f993dc379112bafdd1b5eb91342a63754398253290605bb976e0f039011a6cb" exitCode=0 Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.097163 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" event={"ID":"66282a53-26c3-41e5-ac68-d9cba1c12335","Type":"ContainerDied","Data":"2f993dc379112bafdd1b5eb91342a63754398253290605bb976e0f039011a6cb"} Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.120439 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59779d96f5-vtczj"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.161352 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-config-data\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.161824 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d849b72a-474e-46bf-826b-4157a351cf12-horizon-secret-key\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.162019 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7km\" (UniqueName: \"kubernetes.io/projected/c0e98bf9-342d-44dc-9742-1a732178eebd-kube-api-access-xr7km\") pod \"barbican-db-sync-vdgvw\" (UID: 
\"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.162381 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-db-sync-config-data\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.163135 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d849b72a-474e-46bf-826b-4157a351cf12-logs\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.164017 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-scripts\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.165888 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-combined-ca-bundle\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.166143 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnb4p\" (UniqueName: \"kubernetes.io/projected/d849b72a-474e-46bf-826b-4157a351cf12-kube-api-access-lnb4p\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.171261 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-z9qsx"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.174787 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-combined-ca-bundle\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.185415 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-db-sync-config-data\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.186885 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7km\" (UniqueName: \"kubernetes.io/projected/c0e98bf9-342d-44dc-9742-1a732178eebd-kube-api-access-xr7km\") pod \"barbican-db-sync-vdgvw\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.194724 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.195503 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qg7br"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.206457 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.214478 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.214721 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.214872 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xtjfn" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.258415 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qg7br"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.258457 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.259816 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.259901 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.267942 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-p8xhk"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.269564 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271330 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-config-data\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271424 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-config-data\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271443 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d849b72a-474e-46bf-826b-4157a351cf12-horizon-secret-key\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271502 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-combined-ca-bundle\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271523 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjrp\" (UniqueName: \"kubernetes.io/projected/1de98726-0c88-46ae-9df5-fd6d031233f4-kube-api-access-twjrp\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271558 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d849b72a-474e-46bf-826b-4157a351cf12-logs\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271578 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-scripts\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271625 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-scripts\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.271702 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnb4p\" (UniqueName: \"kubernetes.io/projected/d849b72a-474e-46bf-826b-4157a351cf12-kube-api-access-lnb4p\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 
crc kubenswrapper[4679]: I0203 12:24:40.271741 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de98726-0c88-46ae-9df5-fd6d031233f4-logs\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.273013 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-config-data\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.276030 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.276379 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.277703 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fhx4w" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.277743 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.279307 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-scripts\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.283126 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d849b72a-474e-46bf-826b-4157a351cf12-horizon-secret-key\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.285585 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.287878 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-p8xhk"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.288857 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d849b72a-474e-46bf-826b-4157a351cf12-logs\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.315209 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnb4p\" (UniqueName: \"kubernetes.io/projected/d849b72a-474e-46bf-826b-4157a351cf12-kube-api-access-lnb4p\") pod \"horizon-59779d96f5-vtczj\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.335092 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.351045 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.376235 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.376437 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de98726-0c88-46ae-9df5-fd6d031233f4-logs\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.376472 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-config-data\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.376536 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.376600 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.378471 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de98726-0c88-46ae-9df5-fd6d031233f4-logs\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.378667 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-config\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.378707 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-logs\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.378843 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-combined-ca-bundle\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 
crc kubenswrapper[4679]: I0203 12:24:40.381101 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381160 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjrp\" (UniqueName: \"kubernetes.io/projected/1de98726-0c88-46ae-9df5-fd6d031233f4-kube-api-access-twjrp\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381323 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-scripts\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381391 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwrf4\" (UniqueName: \"kubernetes.io/projected/b340ddef-8a7b-459d-af05-97756d80e7eb-kube-api-access-zwrf4\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381429 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381471 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381545 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381680 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381704 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6vqp\" (UniqueName: \"kubernetes.io/projected/5522947c-53a9-493d-95c8-b10c4ab834ea-kube-api-access-l6vqp\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " 
pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381766 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.381805 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.384660 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-config-data\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.387954 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-scripts\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.388771 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-combined-ca-bundle\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.405446 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twjrp\" (UniqueName: \"kubernetes.io/projected/1de98726-0c88-46ae-9df5-fd6d031233f4-kube-api-access-twjrp\") pod \"placement-db-sync-qg7br\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.457048 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484147 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484208 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6vqp\" (UniqueName: \"kubernetes.io/projected/5522947c-53a9-493d-95c8-b10c4ab834ea-kube-api-access-l6vqp\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484265 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484297 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484418 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484450 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484476 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-config\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484503 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-logs\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc 
kubenswrapper[4679]: I0203 12:24:40.484551 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484612 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwrf4\" (UniqueName: \"kubernetes.io/projected/b340ddef-8a7b-459d-af05-97756d80e7eb-kube-api-access-zwrf4\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484641 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484673 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.484710 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.485239 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.486305 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-config\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.489869 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-logs\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.490476 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.490745 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.491050 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.491331 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.493591 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.508446 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.512161 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6vqp\" (UniqueName: \"kubernetes.io/projected/5522947c-53a9-493d-95c8-b10c4ab834ea-kube-api-access-l6vqp\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.515745 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.534862 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.535553 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qg7br" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.536508 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwrf4\" (UniqueName: \"kubernetes.io/projected/b340ddef-8a7b-459d-af05-97756d80e7eb-kube-api-access-zwrf4\") pod \"dnsmasq-dns-785d8bcb8c-p8xhk\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.539967 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.620180 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.659411 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.687998 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9z7tv"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.720096 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-z9qsx"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.771711 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.775282 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.778856 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.782584 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.787052 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:24:40 crc kubenswrapper[4679]: W0203 12:24:40.788486 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaef2f29_b49a_4f88_b88f_b8e5581e1033.slice/crio-ca18b37629310923c56852e7ff0240bdf32730b7b7adf9b05b09411af17cd9b4 WatchSource:0}: Error finding container ca18b37629310923c56852e7ff0240bdf32730b7b7adf9b05b09411af17cd9b4: Status 404 returned error can't find the container with id ca18b37629310923c56852e7ff0240bdf32730b7b7adf9b05b09411af17cd9b4 Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.860321 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.910400 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.911497 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.912451 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.912668 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.912856 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.912990 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h659\" (UniqueName: \"kubernetes.io/projected/c8911344-9b22-448e-a8cf-35a83accf3d3-kube-api-access-6h659\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.913151 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-logs\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.913906 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.914079 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:40 crc kubenswrapper[4679]: I0203 12:24:40.923944 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84b5f78fb9-n9b9g"] Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.016089 4679 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-svc\") pod \"66282a53-26c3-41e5-ac68-d9cba1c12335\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.016802 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-nb\") pod \"66282a53-26c3-41e5-ac68-d9cba1c12335\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.016859 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-sb\") pod \"66282a53-26c3-41e5-ac68-d9cba1c12335\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.016917 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-config\") pod \"66282a53-26c3-41e5-ac68-d9cba1c12335\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.017176 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-swift-storage-0\") pod \"66282a53-26c3-41e5-ac68-d9cba1c12335\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.017528 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4trg\" (UniqueName: \"kubernetes.io/projected/66282a53-26c3-41e5-ac68-d9cba1c12335-kube-api-access-p4trg\") pod \"66282a53-26c3-41e5-ac68-d9cba1c12335\" (UID: \"66282a53-26c3-41e5-ac68-d9cba1c12335\") " Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.018219 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.018289 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.018489 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.018789 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 
12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.018823 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h659\" (UniqueName: \"kubernetes.io/projected/c8911344-9b22-448e-a8cf-35a83accf3d3-kube-api-access-6h659\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.018943 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-logs\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.019028 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.019125 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.019495 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.020632 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-logs\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.021194 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.027218 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.030157 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.030500 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.031978 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66282a53-26c3-41e5-ac68-d9cba1c12335-kube-api-access-p4trg" (OuterVolumeSpecName: "kube-api-access-p4trg") pod "66282a53-26c3-41e5-ac68-d9cba1c12335" (UID: "66282a53-26c3-41e5-ac68-d9cba1c12335"). InnerVolumeSpecName "kube-api-access-p4trg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.040081 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h659\" (UniqueName: \"kubernetes.io/projected/c8911344-9b22-448e-a8cf-35a83accf3d3-kube-api-access-6h659\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.044890 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.085589 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.094125 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.125130 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4trg\" (UniqueName: \"kubernetes.io/projected/66282a53-26c3-41e5-ac68-d9cba1c12335-kube-api-access-p4trg\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.130904 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" event={"ID":"caef2f29-b49a-4f88-b88f-b8e5581e1033","Type":"ContainerStarted","Data":"ca18b37629310923c56852e7ff0240bdf32730b7b7adf9b05b09411af17cd9b4"} Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.135458 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" event={"ID":"66282a53-26c3-41e5-ac68-d9cba1c12335","Type":"ContainerDied","Data":"4f6204871970502499b2591591333f9d780dd45dac5d6024ac463cde68978f75"} Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.135486 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-h8xnp" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.135520 4679 scope.go:117] "RemoveContainer" containerID="2f993dc379112bafdd1b5eb91342a63754398253290605bb976e0f039011a6cb" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.139195 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9z7tv" event={"ID":"fb306c6b-2b1c-49af-864c-7fb9bee2b26a","Type":"ContainerStarted","Data":"4acd489c2b586fcabcf03930a0ecd1a445022609791970f617b45c0ca87ebdda"} Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.149678 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b5f78fb9-n9b9g" event={"ID":"79aa98a4-ef64-4380-a7e1-4d2b04f7279f","Type":"ContainerStarted","Data":"910d49890c1c821a24dc0776f520f5cff08df005b6e7cf22a7deea502e33e331"} Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.168026 4679 scope.go:117] "RemoveContainer" containerID="b00825c469ea19d985c620c1ffc698a41ddf34571b314809c0042d92f598a536" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.175906 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "66282a53-26c3-41e5-ac68-d9cba1c12335" (UID: "66282a53-26c3-41e5-ac68-d9cba1c12335"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.181456 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66282a53-26c3-41e5-ac68-d9cba1c12335" (UID: "66282a53-26c3-41e5-ac68-d9cba1c12335"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.183972 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-config" (OuterVolumeSpecName: "config") pod "66282a53-26c3-41e5-ac68-d9cba1c12335" (UID: "66282a53-26c3-41e5-ac68-d9cba1c12335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.194539 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66282a53-26c3-41e5-ac68-d9cba1c12335" (UID: "66282a53-26c3-41e5-ac68-d9cba1c12335"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.205091 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66282a53-26c3-41e5-ac68-d9cba1c12335" (UID: "66282a53-26c3-41e5-ac68-d9cba1c12335"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.211011 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.227285 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.227307 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.227316 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.227329 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:41 crc kubenswrapper[4679]: I0203 12:24:41.227339 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66282a53-26c3-41e5-ac68-d9cba1c12335-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.606539 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m9g7v"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.672157 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59779d96f5-vtczj"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.734095 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7n6pw"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.757219 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.784308 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vdgvw"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.822790 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-p8xhk"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.854474 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59779d96f5-vtczj"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.890204 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8xnp"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.922202 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-h8xnp"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.946921 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:41.972549 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qg7br"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.028672 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d48b85f7-tlq6z"] Feb 03 12:24:42 crc kubenswrapper[4679]: E0203 12:24:42.029139 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="dnsmasq-dns" Feb 03 12:24:42 crc 
kubenswrapper[4679]: I0203 12:24:42.029157 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="dnsmasq-dns" Feb 03 12:24:42 crc kubenswrapper[4679]: E0203 12:24:42.029199 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="init" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.029208 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="init" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.031032 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" containerName="dnsmasq-dns" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.032058 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.045680 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d48b85f7-tlq6z"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.147974 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv92b\" (UniqueName: \"kubernetes.io/projected/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-kube-api-access-sv92b\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.148060 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-scripts\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.148094 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-config-data\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.148141 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-horizon-secret-key\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.148173 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-logs\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.212179 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59779d96f5-vtczj" event={"ID":"d849b72a-474e-46bf-826b-4157a351cf12","Type":"ContainerStarted","Data":"5d06e2133c96266a526690ca3ed380b30a7423b12cdbae6b4b59c851c6587fe6"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.232612 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66282a53-26c3-41e5-ac68-d9cba1c12335" 
path="/var/lib/kubelet/pods/66282a53-26c3-41e5-ac68-d9cba1c12335/volumes" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.233664 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerStarted","Data":"70ef0ac06a9432360f47e13a91b6536fbe2b345abca45776129e0f0fbe1e84b3"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.233691 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" event={"ID":"b340ddef-8a7b-459d-af05-97756d80e7eb","Type":"ContainerStarted","Data":"c384b06cb6f5594a9b4599ffc99f73631aa7ddd8e531a6b4b55418a6b36c6f92"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.251381 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv92b\" (UniqueName: \"kubernetes.io/projected/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-kube-api-access-sv92b\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.251440 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-scripts\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.251465 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-config-data\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.251501 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-horizon-secret-key\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.251525 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-logs\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.252510 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-logs\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.253014 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-config-data\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.253499 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-scripts\") pod \"horizon-7d48b85f7-tlq6z\" (UID: 
\"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.253706 4679 generic.go:334] "Generic (PLEG): container finished" podID="caef2f29-b49a-4f88-b88f-b8e5581e1033" containerID="22247fbf1895e2172cb19cb500db93b6b52490f359a04eea47a80e001cc35373" exitCode=0 Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.253806 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" event={"ID":"caef2f29-b49a-4f88-b88f-b8e5581e1033","Type":"ContainerDied","Data":"22247fbf1895e2172cb19cb500db93b6b52490f359a04eea47a80e001cc35373"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.263300 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-horizon-secret-key\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.265651 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m9g7v" event={"ID":"cc594779-2b21-4b8c-8fc6-a2f51273089d","Type":"ContainerStarted","Data":"938900c002d5f1560c5e03e3b530fe271f4b4709ce38e89fd7bf6c2611d33fa2"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.275819 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7n6pw" event={"ID":"bc9ca558-ad13-4599-80e5-05be55c84a55","Type":"ContainerStarted","Data":"49fcaaca2753fd2597225726d76eba6f1efc8bec4c50db05ced84b26552ada11"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.281955 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.290992 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv92b\" (UniqueName: \"kubernetes.io/projected/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-kube-api-access-sv92b\") pod \"horizon-7d48b85f7-tlq6z\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.308680 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vdgvw" event={"ID":"c0e98bf9-342d-44dc-9742-1a732178eebd","Type":"ContainerStarted","Data":"67a3203baea9b8e73d52468c580347e31bb83137edf54bfb27f14ebb98e1998a"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.313995 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9z7tv" event={"ID":"fb306c6b-2b1c-49af-864c-7fb9bee2b26a","Type":"ContainerStarted","Data":"749a751a89e0ab62f219c86114e5872241def892a351f8505e45038cf3e6db70"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.334620 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qg7br" event={"ID":"1de98726-0c88-46ae-9df5-fd6d031233f4","Type":"ContainerStarted","Data":"31912d1938e76960f18f8c61919ce6746d67ba05f1ea32f6b24952e8054c5cbe"} Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.349207 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9z7tv" podStartSLOduration=3.349187934 podStartE2EDuration="3.349187934s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 12:24:42.346213246 +0000 UTC m=+1154.821109324" watchObservedRunningTime="2026-02-03 12:24:42.349187934 +0000 UTC m=+1154.824084012" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.371048 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:24:42 crc kubenswrapper[4679]: I0203 12:24:42.894946 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.258720 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.280118 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d48b85f7-tlq6z"] Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.288785 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:43 crc kubenswrapper[4679]: W0203 12:24:43.332559 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8911344_9b22_448e_a8cf_35a83accf3d3.slice/crio-6152f68d44aac07de210544c260751b0c8e67ce1556d5409924c91548a819275 WatchSource:0}: Error finding container 6152f68d44aac07de210544c260751b0c8e67ce1556d5409924c91548a819275: Status 404 returned error can't find the container with id 6152f68d44aac07de210544c260751b0c8e67ce1556d5409924c91548a819275 Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.367532 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8911344-9b22-448e-a8cf-35a83accf3d3","Type":"ContainerStarted","Data":"6152f68d44aac07de210544c260751b0c8e67ce1556d5409924c91548a819275"} Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.370433 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d48b85f7-tlq6z" event={"ID":"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139","Type":"ContainerStarted","Data":"ef1e67386d13ea86371a2af5f3c5120ecbdcdcbe731c1ccb17525d932f79dd8f"} Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.375393 4679 generic.go:334] "Generic (PLEG): container finished" podID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerID="4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a" exitCode=0 Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.375474 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" event={"ID":"b340ddef-8a7b-459d-af05-97756d80e7eb","Type":"ContainerDied","Data":"4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a"} Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.380081 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m9g7v" event={"ID":"cc594779-2b21-4b8c-8fc6-a2f51273089d","Type":"ContainerStarted","Data":"fdc5a95638ddc022b29a636131da9b5cf00ef384551a808b1bb943d5da051f33"} Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.382755 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5522947c-53a9-493d-95c8-b10c4ab834ea","Type":"ContainerStarted","Data":"7662f2d8ddf5e00c5928e6b9c93d3ac8dd481050f24aabcca9cfeda7cb34e89d"} Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.388779 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.388816 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-z9qsx" event={"ID":"caef2f29-b49a-4f88-b88f-b8e5581e1033","Type":"ContainerDied","Data":"ca18b37629310923c56852e7ff0240bdf32730b7b7adf9b05b09411af17cd9b4"} Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.389038 4679 scope.go:117] "RemoveContainer" containerID="22247fbf1895e2172cb19cb500db93b6b52490f359a04eea47a80e001cc35373" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.429662 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-nb\") pod \"caef2f29-b49a-4f88-b88f-b8e5581e1033\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.429795 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-sb\") pod \"caef2f29-b49a-4f88-b88f-b8e5581e1033\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.429818 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-svc\") pod \"caef2f29-b49a-4f88-b88f-b8e5581e1033\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.429847 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-config\") pod \"caef2f29-b49a-4f88-b88f-b8e5581e1033\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.429874 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcvvl\" (UniqueName: \"kubernetes.io/projected/caef2f29-b49a-4f88-b88f-b8e5581e1033-kube-api-access-rcvvl\") pod \"caef2f29-b49a-4f88-b88f-b8e5581e1033\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.429959 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-swift-storage-0\") pod \"caef2f29-b49a-4f88-b88f-b8e5581e1033\" (UID: \"caef2f29-b49a-4f88-b88f-b8e5581e1033\") " Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.443291 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-m9g7v" podStartSLOduration=4.443259299 podStartE2EDuration="4.443259299s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:43.433123445 +0000 UTC m=+1155.908019533" watchObservedRunningTime="2026-02-03 12:24:43.443259299 +0000 UTC m=+1155.918155387" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.465560 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caef2f29-b49a-4f88-b88f-b8e5581e1033-kube-api-access-rcvvl" (OuterVolumeSpecName: "kube-api-access-rcvvl") pod 
"caef2f29-b49a-4f88-b88f-b8e5581e1033" (UID: "caef2f29-b49a-4f88-b88f-b8e5581e1033"). InnerVolumeSpecName "kube-api-access-rcvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.519483 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "caef2f29-b49a-4f88-b88f-b8e5581e1033" (UID: "caef2f29-b49a-4f88-b88f-b8e5581e1033"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.542724 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.543029 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcvvl\" (UniqueName: \"kubernetes.io/projected/caef2f29-b49a-4f88-b88f-b8e5581e1033-kube-api-access-rcvvl\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.583990 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "caef2f29-b49a-4f88-b88f-b8e5581e1033" (UID: "caef2f29-b49a-4f88-b88f-b8e5581e1033"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.589318 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-config" (OuterVolumeSpecName: "config") pod "caef2f29-b49a-4f88-b88f-b8e5581e1033" (UID: "caef2f29-b49a-4f88-b88f-b8e5581e1033"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.591668 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "caef2f29-b49a-4f88-b88f-b8e5581e1033" (UID: "caef2f29-b49a-4f88-b88f-b8e5581e1033"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.599727 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "caef2f29-b49a-4f88-b88f-b8e5581e1033" (UID: "caef2f29-b49a-4f88-b88f-b8e5581e1033"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.645397 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.645449 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.645462 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.645476 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/caef2f29-b49a-4f88-b88f-b8e5581e1033-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.832795 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-z9qsx"] Feb 03 12:24:43 crc kubenswrapper[4679]: I0203 12:24:43.847610 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-z9qsx"] Feb 03 12:24:44 crc kubenswrapper[4679]: I0203 12:24:44.226502 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caef2f29-b49a-4f88-b88f-b8e5581e1033" path="/var/lib/kubelet/pods/caef2f29-b49a-4f88-b88f-b8e5581e1033/volumes" Feb 03 12:24:44 crc kubenswrapper[4679]: I0203 12:24:44.520741 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" event={"ID":"b340ddef-8a7b-459d-af05-97756d80e7eb","Type":"ContainerStarted","Data":"697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba"} Feb 03 12:24:44 crc kubenswrapper[4679]: I0203 12:24:44.521498 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:44 crc kubenswrapper[4679]: I0203 12:24:44.547473 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5522947c-53a9-493d-95c8-b10c4ab834ea","Type":"ContainerStarted","Data":"f60ca477a1302c89cdd5ae3f46df67fe991a96f62f56b6f2527dfbd51bf37200"} Feb 03 12:24:44 crc kubenswrapper[4679]: I0203 12:24:44.586598 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" podStartSLOduration=4.58656961 podStartE2EDuration="4.58656961s" podCreationTimestamp="2026-02-03 12:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:44.571133617 +0000 UTC m=+1157.046029735" watchObservedRunningTime="2026-02-03 12:24:44.58656961 +0000 UTC m=+1157.061465698" Feb 03 12:24:45 crc kubenswrapper[4679]: I0203 12:24:45.591716 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5522947c-53a9-493d-95c8-b10c4ab834ea","Type":"ContainerStarted","Data":"5e8aa42fae2149c08c58b84be918dc30a61ab229869bc8d839d4e1729d70f59e"} Feb 03 12:24:45 crc kubenswrapper[4679]: I0203 12:24:45.591886 4679 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-log" containerID="cri-o://f60ca477a1302c89cdd5ae3f46df67fe991a96f62f56b6f2527dfbd51bf37200" gracePeriod=30 Feb 03 12:24:45 crc kubenswrapper[4679]: I0203 12:24:45.591912 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-httpd" containerID="cri-o://5e8aa42fae2149c08c58b84be918dc30a61ab229869bc8d839d4e1729d70f59e" gracePeriod=30 Feb 03 12:24:45 crc kubenswrapper[4679]: I0203 12:24:45.596098 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8911344-9b22-448e-a8cf-35a83accf3d3","Type":"ContainerStarted","Data":"184805bce6a4b5b4f3eade6e558e1f2b5f2696db744dbb081b9d7618c74590de"} Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.622682 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8911344-9b22-448e-a8cf-35a83accf3d3","Type":"ContainerStarted","Data":"34ac2eedfaf5189e7ee75ebcab3b4ffc1af7817314276d32d1cea2fb6dd37ed6"} Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.622816 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-log" containerID="cri-o://184805bce6a4b5b4f3eade6e558e1f2b5f2696db744dbb081b9d7618c74590de" gracePeriod=30 Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.623952 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-httpd" containerID="cri-o://34ac2eedfaf5189e7ee75ebcab3b4ffc1af7817314276d32d1cea2fb6dd37ed6" gracePeriod=30 Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.629097 4679 generic.go:334] "Generic (PLEG): container finished" podID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerID="5e8aa42fae2149c08c58b84be918dc30a61ab229869bc8d839d4e1729d70f59e" exitCode=0 Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.629130 4679 generic.go:334] "Generic (PLEG): container finished" podID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerID="f60ca477a1302c89cdd5ae3f46df67fe991a96f62f56b6f2527dfbd51bf37200" exitCode=143 Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.629151 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5522947c-53a9-493d-95c8-b10c4ab834ea","Type":"ContainerDied","Data":"5e8aa42fae2149c08c58b84be918dc30a61ab229869bc8d839d4e1729d70f59e"} Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.630697 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5522947c-53a9-493d-95c8-b10c4ab834ea","Type":"ContainerDied","Data":"f60ca477a1302c89cdd5ae3f46df67fe991a96f62f56b6f2527dfbd51bf37200"} Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.654958 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.6549406730000005 podStartE2EDuration="7.654940673s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:45.625860404 +0000 UTC 
m=+1158.100756492" watchObservedRunningTime="2026-02-03 12:24:46.654940673 +0000 UTC m=+1159.129836761" Feb 03 12:24:46 crc kubenswrapper[4679]: I0203 12:24:46.663331 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.663290871 podStartE2EDuration="7.663290871s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:24:46.656225236 +0000 UTC m=+1159.131121314" watchObservedRunningTime="2026-02-03 12:24:46.663290871 +0000 UTC m=+1159.138186959" Feb 03 12:24:47 crc kubenswrapper[4679]: I0203 12:24:47.645731 4679 generic.go:334] "Generic (PLEG): container finished" podID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerID="34ac2eedfaf5189e7ee75ebcab3b4ffc1af7817314276d32d1cea2fb6dd37ed6" exitCode=0 Feb 03 12:24:47 crc kubenswrapper[4679]: I0203 12:24:47.645801 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8911344-9b22-448e-a8cf-35a83accf3d3","Type":"ContainerDied","Data":"34ac2eedfaf5189e7ee75ebcab3b4ffc1af7817314276d32d1cea2fb6dd37ed6"} Feb 03 12:24:47 crc kubenswrapper[4679]: I0203 12:24:47.645872 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8911344-9b22-448e-a8cf-35a83accf3d3","Type":"ContainerDied","Data":"184805bce6a4b5b4f3eade6e558e1f2b5f2696db744dbb081b9d7618c74590de"} Feb 03 12:24:47 crc kubenswrapper[4679]: I0203 12:24:47.645828 4679 generic.go:334] "Generic (PLEG): container finished" podID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerID="184805bce6a4b5b4f3eade6e558e1f2b5f2696db744dbb081b9d7618c74590de" exitCode=143 Feb 03 12:24:47 crc kubenswrapper[4679]: I0203 12:24:47.649540 4679 generic.go:334] "Generic (PLEG): container finished" podID="fb306c6b-2b1c-49af-864c-7fb9bee2b26a" containerID="749a751a89e0ab62f219c86114e5872241def892a351f8505e45038cf3e6db70" exitCode=0 Feb 03 12:24:47 crc kubenswrapper[4679]: I0203 12:24:47.649592 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9z7tv" event={"ID":"fb306c6b-2b1c-49af-864c-7fb9bee2b26a","Type":"ContainerDied","Data":"749a751a89e0ab62f219c86114e5872241def892a351f8505e45038cf3e6db70"} Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.140169 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84b5f78fb9-n9b9g"] Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.190773 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-755ddc4dc6-5tjzs"] Feb 03 12:24:48 crc kubenswrapper[4679]: E0203 12:24:48.191214 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caef2f29-b49a-4f88-b88f-b8e5581e1033" containerName="init" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.191237 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="caef2f29-b49a-4f88-b88f-b8e5581e1033" containerName="init" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.191484 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="caef2f29-b49a-4f88-b88f-b8e5581e1033" containerName="init" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.192668 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.195739 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.250792 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-755ddc4dc6-5tjzs"] Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.285432 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d48b85f7-tlq6z"] Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.335162 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74557bdb5d-lsfq8"] Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.337869 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.366155 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74557bdb5d-lsfq8"] Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.382912 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-horizon-secret-key\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.382963 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c53de0-396a-4234-969c-65e4c2227710-logs\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.382995 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-horizon-tls-certs\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383015 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-scripts\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383031 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cft\" (UniqueName: \"kubernetes.io/projected/d2c53de0-396a-4234-969c-65e4c2227710-kube-api-access-l7cft\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383047 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wj49\" (UniqueName: \"kubernetes.io/projected/a09ad5f1-6af1-452d-a08f-271579ecb3d1-kube-api-access-6wj49\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383063 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a09ad5f1-6af1-452d-a08f-271579ecb3d1-scripts\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383082 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-combined-ca-bundle\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383103 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09ad5f1-6af1-452d-a08f-271579ecb3d1-logs\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383173 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-config-data\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383222 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-combined-ca-bundle\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383247 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-secret-key\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383275 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a09ad5f1-6af1-452d-a08f-271579ecb3d1-config-data\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.383298 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-tls-certs\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.484921 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-combined-ca-bundle\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.484991 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-secret-key\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485022 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a09ad5f1-6af1-452d-a08f-271579ecb3d1-config-data\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485058 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-tls-certs\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485083 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-horizon-secret-key\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485115 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c53de0-396a-4234-969c-65e4c2227710-logs\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485150 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-horizon-tls-certs\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485172 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-scripts\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485193 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7cft\" (UniqueName: \"kubernetes.io/projected/d2c53de0-396a-4234-969c-65e4c2227710-kube-api-access-l7cft\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485220 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wj49\" (UniqueName: \"kubernetes.io/projected/a09ad5f1-6af1-452d-a08f-271579ecb3d1-kube-api-access-6wj49\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485250 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a09ad5f1-6af1-452d-a08f-271579ecb3d1-scripts\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485276 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-combined-ca-bundle\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485303 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09ad5f1-6af1-452d-a08f-271579ecb3d1-logs\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.485422 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-config-data\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.486623 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-scripts\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.487140 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c53de0-396a-4234-969c-65e4c2227710-logs\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.487178 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-config-data\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.487790 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a09ad5f1-6af1-452d-a08f-271579ecb3d1-scripts\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.488288 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a09ad5f1-6af1-452d-a08f-271579ecb3d1-logs\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.488827 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a09ad5f1-6af1-452d-a08f-271579ecb3d1-config-data\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.493511 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-horizon-secret-key\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.496886 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-horizon-tls-certs\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.497161 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-combined-ca-bundle\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.499048 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-secret-key\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.500258 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09ad5f1-6af1-452d-a08f-271579ecb3d1-combined-ca-bundle\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.511639 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wj49\" (UniqueName: \"kubernetes.io/projected/a09ad5f1-6af1-452d-a08f-271579ecb3d1-kube-api-access-6wj49\") pod \"horizon-74557bdb5d-lsfq8\" (UID: \"a09ad5f1-6af1-452d-a08f-271579ecb3d1\") " pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.513446 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7cft\" (UniqueName: \"kubernetes.io/projected/d2c53de0-396a-4234-969c-65e4c2227710-kube-api-access-l7cft\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.512279 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-tls-certs\") pod \"horizon-755ddc4dc6-5tjzs\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") " pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.535688 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:24:48 crc kubenswrapper[4679]: I0203 12:24:48.661734 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:24:50 crc kubenswrapper[4679]: I0203 12:24:50.661575 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:24:50 crc kubenswrapper[4679]: I0203 12:24:50.745759 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-7b47n"] Feb 03 12:24:50 crc kubenswrapper[4679]: I0203 12:24:50.746019 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="dnsmasq-dns" containerID="cri-o://c2475583026b0d5df976df90e7af917ab4bd9859134ac49ae0a929c8e8f4937e" gracePeriod=10 Feb 03 12:24:51 crc kubenswrapper[4679]: I0203 12:24:51.700822 4679 generic.go:334] "Generic (PLEG): container finished" podID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerID="c2475583026b0d5df976df90e7af917ab4bd9859134ac49ae0a929c8e8f4937e" exitCode=0 Feb 03 12:24:51 crc kubenswrapper[4679]: I0203 12:24:51.700897 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" event={"ID":"28066d94-f1ec-4f33-ac4f-052d767c8533","Type":"ContainerDied","Data":"c2475583026b0d5df976df90e7af917ab4bd9859134ac49ae0a929c8e8f4937e"} Feb 03 12:24:54 crc kubenswrapper[4679]: I0203 12:24:54.268211 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Feb 03 12:24:57 crc kubenswrapper[4679]: E0203 12:24:57.539827 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 03 12:24:57 crc kubenswrapper[4679]: E0203 12:24:57.540509 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twjrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-qg7br_openstack(1de98726-0c88-46ae-9df5-fd6d031233f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:24:57 crc kubenswrapper[4679]: E0203 12:24:57.541720 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qg7br" podUID="1de98726-0c88-46ae-9df5-fd6d031233f4" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.666495 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.772241 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.772210 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5522947c-53a9-493d-95c8-b10c4ab834ea","Type":"ContainerDied","Data":"7662f2d8ddf5e00c5928e6b9c93d3ac8dd481050f24aabcca9cfeda7cb34e89d"} Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.772455 4679 scope.go:117] "RemoveContainer" containerID="5e8aa42fae2149c08c58b84be918dc30a61ab229869bc8d839d4e1729d70f59e" Feb 03 12:24:57 crc kubenswrapper[4679]: E0203 12:24:57.774266 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-qg7br" podUID="1de98726-0c88-46ae-9df5-fd6d031233f4" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.821845 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.821980 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-internal-tls-certs\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.822067 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-scripts\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.822140 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-config-data\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.822213 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6vqp\" (UniqueName: \"kubernetes.io/projected/5522947c-53a9-493d-95c8-b10c4ab834ea-kube-api-access-l6vqp\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.822354 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-logs\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.822444 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-httpd-run\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.822502 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-combined-ca-bundle\") pod \"5522947c-53a9-493d-95c8-b10c4ab834ea\" (UID: \"5522947c-53a9-493d-95c8-b10c4ab834ea\") " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.823294 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-logs" (OuterVolumeSpecName: "logs") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.823340 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.830389 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.837721 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-scripts" (OuterVolumeSpecName: "scripts") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.845096 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5522947c-53a9-493d-95c8-b10c4ab834ea-kube-api-access-l6vqp" (OuterVolumeSpecName: "kube-api-access-l6vqp") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "kube-api-access-l6vqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.858915 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.883616 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.890147 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-config-data" (OuterVolumeSpecName: "config-data") pod "5522947c-53a9-493d-95c8-b10c4ab834ea" (UID: "5522947c-53a9-493d-95c8-b10c4ab834ea"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925018 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6vqp\" (UniqueName: \"kubernetes.io/projected/5522947c-53a9-493d-95c8-b10c4ab834ea-kube-api-access-l6vqp\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925056 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925067 4679 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5522947c-53a9-493d-95c8-b10c4ab834ea-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925080 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925099 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925109 4679 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925121 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.925132 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5522947c-53a9-493d-95c8-b10c4ab834ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:57 crc kubenswrapper[4679]: I0203 12:24:57.946576 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.026482 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.109618 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.117741 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.125157 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.125452 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55h5dch589h647hb6hcfh5fdh64ch679hb6h5dh678h669h5dbh8ch678h68fh6h645h67ch568h585hch57dh575h584h5d6hf9h5d8h58fhbch699q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2vld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2d70a61e-3ae9-4111-9a4d-6bc363fb09db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.138734 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.161915 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 
12:24:58.162471 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-httpd" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.162488 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-httpd" Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.162507 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb306c6b-2b1c-49af-864c-7fb9bee2b26a" containerName="keystone-bootstrap" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.162516 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb306c6b-2b1c-49af-864c-7fb9bee2b26a" containerName="keystone-bootstrap" Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.162548 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-log" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.162556 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-log" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.162788 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-log" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.162813 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb306c6b-2b1c-49af-864c-7fb9bee2b26a" containerName="keystone-bootstrap" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.162836 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" containerName="glance-httpd" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.164209 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.167865 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.168061 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n575h74h589h56hc6hdchffh8h56ch669h67h64ch99h697h577h76hb7h597h79h65ch595h59ch5f5h685h559hdbhfh5f6h544h5c8hdchb5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnb4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59779d96f5-vtczj_openstack(d849b72a-474e-46bf-826b-4157a351cf12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:24:58 crc kubenswrapper[4679]: E0203 12:24:58.170492 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59779d96f5-vtczj" podUID="d849b72a-474e-46bf-826b-4157a351cf12" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.173177 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.175196 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.198302 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.229997 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-config-data\") pod \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.230511 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-scripts\") pod \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.230577 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-fernet-keys\") pod \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.230762 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-credential-keys\") pod \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.230813 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-combined-ca-bundle\") pod \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.235466 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-scripts" (OuterVolumeSpecName: "scripts") pod "fb306c6b-2b1c-49af-864c-7fb9bee2b26a" (UID: "fb306c6b-2b1c-49af-864c-7fb9bee2b26a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.241173 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb306c6b-2b1c-49af-864c-7fb9bee2b26a" (UID: "fb306c6b-2b1c-49af-864c-7fb9bee2b26a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.242531 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5pht\" (UniqueName: \"kubernetes.io/projected/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-kube-api-access-w5pht\") pod \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\" (UID: \"fb306c6b-2b1c-49af-864c-7fb9bee2b26a\") " Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.242690 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb306c6b-2b1c-49af-864c-7fb9bee2b26a" (UID: "fb306c6b-2b1c-49af-864c-7fb9bee2b26a"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.243181 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-logs\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.243301 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58p76\" (UniqueName: \"kubernetes.io/projected/91d1ab2d-b565-4322-b193-3143ec9b5919-kube-api-access-58p76\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.243459 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.243628 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.245305 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5522947c-53a9-493d-95c8-b10c4ab834ea" path="/var/lib/kubelet/pods/5522947c-53a9-493d-95c8-b10c4ab834ea/volumes" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.246693 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.247483 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.247607 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.247770 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc 
kubenswrapper[4679]: I0203 12:24:58.247851 4679 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.247870 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.247879 4679 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.253107 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-kube-api-access-w5pht" (OuterVolumeSpecName: "kube-api-access-w5pht") pod "fb306c6b-2b1c-49af-864c-7fb9bee2b26a" (UID: "fb306c6b-2b1c-49af-864c-7fb9bee2b26a"). InnerVolumeSpecName "kube-api-access-w5pht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.276599 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-config-data" (OuterVolumeSpecName: "config-data") pod "fb306c6b-2b1c-49af-864c-7fb9bee2b26a" (UID: "fb306c6b-2b1c-49af-864c-7fb9bee2b26a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.285223 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb306c6b-2b1c-49af-864c-7fb9bee2b26a" (UID: "fb306c6b-2b1c-49af-864c-7fb9bee2b26a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350194 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350272 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350348 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350421 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-logs\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350496 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58p76\" (UniqueName: \"kubernetes.io/projected/91d1ab2d-b565-4322-b193-3143ec9b5919-kube-api-access-58p76\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350547 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350615 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350642 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350713 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350724 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5pht\" 
(UniqueName: \"kubernetes.io/projected/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-kube-api-access-w5pht\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.350737 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb306c6b-2b1c-49af-864c-7fb9bee2b26a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.351139 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.352328 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.353929 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-logs\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.354345 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.358580 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-scripts\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.360654 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-config-data\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.361765 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.380974 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58p76\" (UniqueName: \"kubernetes.io/projected/91d1ab2d-b565-4322-b193-3143ec9b5919-kube-api-access-58p76\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.399663 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.517133 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.787578 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9z7tv" event={"ID":"fb306c6b-2b1c-49af-864c-7fb9bee2b26a","Type":"ContainerDied","Data":"4acd489c2b586fcabcf03930a0ecd1a445022609791970f617b45c0ca87ebdda"} Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.788042 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4acd489c2b586fcabcf03930a0ecd1a445022609791970f617b45c0ca87ebdda" Feb 03 12:24:58 crc kubenswrapper[4679]: I0203 12:24:58.787614 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9z7tv" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.313756 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9z7tv"] Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.323663 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9z7tv"] Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.407220 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rrkcf"] Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.408951 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.414156 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.414500 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hk2x5" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.414937 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.415304 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.415784 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.418427 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rrkcf"] Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.579475 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-credential-keys\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.579629 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-fernet-keys\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " 
pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.579693 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-config-data\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.579732 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnd8k\" (UniqueName: \"kubernetes.io/projected/ceafb034-bf62-4347-943f-622426408bb5-kube-api-access-pnd8k\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.579842 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-combined-ca-bundle\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.579995 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-scripts\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.682169 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-config-data\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.682299 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnd8k\" (UniqueName: \"kubernetes.io/projected/ceafb034-bf62-4347-943f-622426408bb5-kube-api-access-pnd8k\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.682969 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-combined-ca-bundle\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.683052 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-scripts\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.683149 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-credential-keys\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc 
kubenswrapper[4679]: I0203 12:24:59.683267 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-fernet-keys\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.688249 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-config-data\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.692850 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-credential-keys\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.696252 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-scripts\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.703127 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-combined-ca-bundle\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.706215 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-fernet-keys\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.718548 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnd8k\" (UniqueName: \"kubernetes.io/projected/ceafb034-bf62-4347-943f-622426408bb5-kube-api-access-pnd8k\") pod \"keystone-bootstrap-rrkcf\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:24:59 crc kubenswrapper[4679]: I0203 12:24:59.731110 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:25:00 crc kubenswrapper[4679]: I0203 12:25:00.227604 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb306c6b-2b1c-49af-864c-7fb9bee2b26a" path="/var/lib/kubelet/pods/fb306c6b-2b1c-49af-864c-7fb9bee2b26a/volumes" Feb 03 12:25:00 crc kubenswrapper[4679]: E0203 12:25:00.528202 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 12:25:00 crc kubenswrapper[4679]: E0203 12:25:00.528428 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68hbh66ch649h557h5d7h88h56ch5d8h5c4h548h56dh5cbh565h647h4h6ch78h95h58ch7ch75h5h65fh5ffh5ddh656h4h594h66dh545hffq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv92b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d48b85f7-tlq6z_openstack(2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:25:00 crc kubenswrapper[4679]: E0203 12:25:00.532457 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7d48b85f7-tlq6z" podUID="2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" Feb 03 12:25:04 crc kubenswrapper[4679]: I0203 12:25:04.268675 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Feb 03 12:25:06 crc kubenswrapper[4679]: I0203 12:25:06.736160 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:25:06 crc kubenswrapper[4679]: I0203 12:25:06.736556 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.409143 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.420996 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.496318 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-sb\") pod \"28066d94-f1ec-4f33-ac4f-052d767c8533\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.496574 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-httpd-run\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.496606 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-nb\") pod \"28066d94-f1ec-4f33-ac4f-052d767c8533\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.496641 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-scripts\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.496763 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-svc\") pod \"28066d94-f1ec-4f33-ac4f-052d767c8533\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.496875 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-swift-storage-0\") pod \"28066d94-f1ec-4f33-ac4f-052d767c8533\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497425 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-httpd-run" (OuterVolumeSpecName: 
"httpd-run") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497583 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-config-data\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497656 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhxbm\" (UniqueName: \"kubernetes.io/projected/28066d94-f1ec-4f33-ac4f-052d767c8533-kube-api-access-lhxbm\") pod \"28066d94-f1ec-4f33-ac4f-052d767c8533\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497715 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-public-tls-certs\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497758 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-config\") pod \"28066d94-f1ec-4f33-ac4f-052d767c8533\" (UID: \"28066d94-f1ec-4f33-ac4f-052d767c8533\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497810 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-combined-ca-bundle\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497867 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h659\" (UniqueName: \"kubernetes.io/projected/c8911344-9b22-448e-a8cf-35a83accf3d3-kube-api-access-6h659\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497892 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.497942 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-logs\") pod \"c8911344-9b22-448e-a8cf-35a83accf3d3\" (UID: \"c8911344-9b22-448e-a8cf-35a83accf3d3\") " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.498921 4679 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.499293 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-logs" (OuterVolumeSpecName: "logs") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: 
"c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.503848 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-scripts" (OuterVolumeSpecName: "scripts") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.509783 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.512779 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8911344-9b22-448e-a8cf-35a83accf3d3-kube-api-access-6h659" (OuterVolumeSpecName: "kube-api-access-6h659") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "kube-api-access-6h659". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.517798 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28066d94-f1ec-4f33-ac4f-052d767c8533-kube-api-access-lhxbm" (OuterVolumeSpecName: "kube-api-access-lhxbm") pod "28066d94-f1ec-4f33-ac4f-052d767c8533" (UID: "28066d94-f1ec-4f33-ac4f-052d767c8533"). InnerVolumeSpecName "kube-api-access-lhxbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.540710 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.557663 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28066d94-f1ec-4f33-ac4f-052d767c8533" (UID: "28066d94-f1ec-4f33-ac4f-052d767c8533"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.571397 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28066d94-f1ec-4f33-ac4f-052d767c8533" (UID: "28066d94-f1ec-4f33-ac4f-052d767c8533"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.575457 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28066d94-f1ec-4f33-ac4f-052d767c8533" (UID: "28066d94-f1ec-4f33-ac4f-052d767c8533"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.586677 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-config" (OuterVolumeSpecName: "config") pod "28066d94-f1ec-4f33-ac4f-052d767c8533" (UID: "28066d94-f1ec-4f33-ac4f-052d767c8533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.590604 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-config-data" (OuterVolumeSpecName: "config-data") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601390 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhxbm\" (UniqueName: \"kubernetes.io/projected/28066d94-f1ec-4f33-ac4f-052d767c8533-kube-api-access-lhxbm\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601432 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601450 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601463 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h659\" (UniqueName: \"kubernetes.io/projected/c8911344-9b22-448e-a8cf-35a83accf3d3-kube-api-access-6h659\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601504 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601516 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8911344-9b22-448e-a8cf-35a83accf3d3-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601529 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601540 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601552 4679 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601566 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.601578 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.612841 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28066d94-f1ec-4f33-ac4f-052d767c8533" (UID: "28066d94-f1ec-4f33-ac4f-052d767c8533"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.616920 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8911344-9b22-448e-a8cf-35a83accf3d3" (UID: "c8911344-9b22-448e-a8cf-35a83accf3d3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.624977 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.703821 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28066d94-f1ec-4f33-ac4f-052d767c8533-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.703884 4679 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8911344-9b22-448e-a8cf-35a83accf3d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.703901 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.899385 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.899408 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8911344-9b22-448e-a8cf-35a83accf3d3","Type":"ContainerDied","Data":"6152f68d44aac07de210544c260751b0c8e67ce1556d5409924c91548a819275"} Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.903759 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" event={"ID":"28066d94-f1ec-4f33-ac4f-052d767c8533","Type":"ContainerDied","Data":"dd8dfdc42eeb68a979eb4db25a038c4fbe0b527c9cdb88815ba66edcb3a5339b"} Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.903965 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.967170 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.982947 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:25:08 crc kubenswrapper[4679]: I0203 12:25:08.991274 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-7b47n"] Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.006476 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-7b47n"] Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.023793 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.024415 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="init" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024431 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="init" Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.024452 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-httpd" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024458 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-httpd" Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.024486 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="dnsmasq-dns" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024492 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="dnsmasq-dns" Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.024509 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-log" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024516 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-log" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024732 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-httpd" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024747 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" containerName="glance-log" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.024761 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="dnsmasq-dns" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.029011 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.034131 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.034392 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.042206 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113288 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113386 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113421 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-logs\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113475 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113520 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcsrr\" (UniqueName: \"kubernetes.io/projected/91430e11-9487-4d11-96af-226beb9e2c1c-kube-api-access-jcsrr\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113596 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113649 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.113699 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.143748 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.149879 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.194821 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.195056 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr7km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vdgvw_openstack(c0e98bf9-342d-44dc-9742-1a732178eebd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.196545 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vdgvw" podUID="c0e98bf9-342d-44dc-9742-1a732178eebd" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214262 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-horizon-secret-key\") pod 
\"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214409 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnb4p\" (UniqueName: \"kubernetes.io/projected/d849b72a-474e-46bf-826b-4157a351cf12-kube-api-access-lnb4p\") pod \"d849b72a-474e-46bf-826b-4157a351cf12\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214466 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-config-data\") pod \"d849b72a-474e-46bf-826b-4157a351cf12\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214523 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-scripts\") pod \"d849b72a-474e-46bf-826b-4157a351cf12\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214581 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-logs\") pod \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214624 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d849b72a-474e-46bf-826b-4157a351cf12-horizon-secret-key\") pod \"d849b72a-474e-46bf-826b-4157a351cf12\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214655 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d849b72a-474e-46bf-826b-4157a351cf12-logs\") pod \"d849b72a-474e-46bf-826b-4157a351cf12\" (UID: \"d849b72a-474e-46bf-826b-4157a351cf12\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214770 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-scripts\") pod \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214837 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv92b\" (UniqueName: \"kubernetes.io/projected/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-kube-api-access-sv92b\") pod \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.214923 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-config-data\") pod \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\" (UID: \"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139\") " Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215280 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d849b72a-474e-46bf-826b-4157a351cf12-logs" (OuterVolumeSpecName: "logs") pod "d849b72a-474e-46bf-826b-4157a351cf12" (UID: 
"d849b72a-474e-46bf-826b-4157a351cf12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215522 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215587 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215641 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215681 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-scripts" (OuterVolumeSpecName: "scripts") pod "d849b72a-474e-46bf-826b-4157a351cf12" (UID: "d849b72a-474e-46bf-826b-4157a351cf12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215690 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215786 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215845 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-logs\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215842 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-config-data" (OuterVolumeSpecName: "config-data") pod "d849b72a-474e-46bf-826b-4157a351cf12" (UID: "d849b72a-474e-46bf-826b-4157a351cf12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215909 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215952 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcsrr\" (UniqueName: \"kubernetes.io/projected/91430e11-9487-4d11-96af-226beb9e2c1c-kube-api-access-jcsrr\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.215994 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.216037 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.216053 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d849b72a-474e-46bf-826b-4157a351cf12-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.216063 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d849b72a-474e-46bf-826b-4157a351cf12-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.216229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-logs" (OuterVolumeSpecName: "logs") pod "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" (UID: "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.216765 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.217025 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-logs\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.218855 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-scripts" (OuterVolumeSpecName: "scripts") pod "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" (UID: "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.220827 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-config-data" (OuterVolumeSpecName: "config-data") pod "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" (UID: "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.221007 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d849b72a-474e-46bf-826b-4157a351cf12-kube-api-access-lnb4p" (OuterVolumeSpecName: "kube-api-access-lnb4p") pod "d849b72a-474e-46bf-826b-4157a351cf12" (UID: "d849b72a-474e-46bf-826b-4157a351cf12"). InnerVolumeSpecName "kube-api-access-lnb4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.222694 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d849b72a-474e-46bf-826b-4157a351cf12-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d849b72a-474e-46bf-826b-4157a351cf12" (UID: "d849b72a-474e-46bf-826b-4157a351cf12"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.223375 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.226076 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" (UID: "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.226128 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-kube-api-access-sv92b" (OuterVolumeSpecName: "kube-api-access-sv92b") pod "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" (UID: "2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139"). InnerVolumeSpecName "kube-api-access-sv92b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.227300 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.236963 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcsrr\" (UniqueName: \"kubernetes.io/projected/91430e11-9487-4d11-96af-226beb9e2c1c-kube-api-access-jcsrr\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.244041 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.260425 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.268237 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.270233 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-7b47n" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318785 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318824 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv92b\" (UniqueName: \"kubernetes.io/projected/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-kube-api-access-sv92b\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318840 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318852 4679 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318865 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnb4p\" (UniqueName: 
\"kubernetes.io/projected/d849b72a-474e-46bf-826b-4157a351cf12-kube-api-access-lnb4p\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318873 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.318881 4679 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d849b72a-474e-46bf-826b-4157a351cf12-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.356817 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.928066 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59779d96f5-vtczj" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.928043 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59779d96f5-vtczj" event={"ID":"d849b72a-474e-46bf-826b-4157a351cf12","Type":"ContainerDied","Data":"5d06e2133c96266a526690ca3ed380b30a7423b12cdbae6b4b59c851c6587fe6"} Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.929870 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d48b85f7-tlq6z" Feb 03 12:25:09 crc kubenswrapper[4679]: I0203 12:25:09.930969 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d48b85f7-tlq6z" event={"ID":"2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139","Type":"ContainerDied","Data":"ef1e67386d13ea86371a2af5f3c5120ecbdcdcbe731c1ccb17525d932f79dd8f"} Feb 03 12:25:09 crc kubenswrapper[4679]: E0203 12:25:09.932145 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vdgvw" podUID="c0e98bf9-342d-44dc-9742-1a732178eebd" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.019975 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d48b85f7-tlq6z"] Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.044240 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d48b85f7-tlq6z"] Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.063647 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59779d96f5-vtczj"] Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.072321 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59779d96f5-vtczj"] Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.227642 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28066d94-f1ec-4f33-ac4f-052d767c8533" path="/var/lib/kubelet/pods/28066d94-f1ec-4f33-ac4f-052d767c8533/volumes" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.228379 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139" path="/var/lib/kubelet/pods/2e3f97f8-46bf-4b9d-b1a9-c1dda38bc139/volumes" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.228940 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8911344-9b22-448e-a8cf-35a83accf3d3" 
path="/var/lib/kubelet/pods/c8911344-9b22-448e-a8cf-35a83accf3d3/volumes" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.232243 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d849b72a-474e-46bf-826b-4157a351cf12" path="/var/lib/kubelet/pods/d849b72a-474e-46bf-826b-4157a351cf12/volumes" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.320679 4679 scope.go:117] "RemoveContainer" containerID="f60ca477a1302c89cdd5ae3f46df67fe991a96f62f56b6f2527dfbd51bf37200" Feb 03 12:25:10 crc kubenswrapper[4679]: E0203 12:25:10.326941 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 03 12:25:10 crc kubenswrapper[4679]: E0203 12:25:10.327222 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxs6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7n6pw_openstack(bc9ca558-ad13-4599-80e5-05be55c84a55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 12:25:10 crc kubenswrapper[4679]: E0203 12:25:10.328745 4679 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7n6pw" podUID="bc9ca558-ad13-4599-80e5-05be55c84a55" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.727548 4679 scope.go:117] "RemoveContainer" containerID="34ac2eedfaf5189e7ee75ebcab3b4ffc1af7817314276d32d1cea2fb6dd37ed6" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.902765 4679 scope.go:117] "RemoveContainer" containerID="184805bce6a4b5b4f3eade6e558e1f2b5f2696db744dbb081b9d7618c74590de" Feb 03 12:25:10 crc kubenswrapper[4679]: E0203 12:25:10.969504 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7n6pw" podUID="bc9ca558-ad13-4599-80e5-05be55c84a55" Feb 03 12:25:10 crc kubenswrapper[4679]: I0203 12:25:10.989983 4679 scope.go:117] "RemoveContainer" containerID="c2475583026b0d5df976df90e7af917ab4bd9859134ac49ae0a929c8e8f4937e" Feb 03 12:25:11 crc kubenswrapper[4679]: I0203 12:25:11.028876 4679 scope.go:117] "RemoveContainer" containerID="0829f58a6a4b8d5e41ea72bd676d2b6f1963f50fbd5fe231692243e816714e8a" Feb 03 12:25:11 crc kubenswrapper[4679]: I0203 12:25:11.149946 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74557bdb5d-lsfq8"] Feb 03 12:25:11 crc kubenswrapper[4679]: W0203 12:25:11.321484 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c53de0_396a_4234_969c_65e4c2227710.slice/crio-79a264c94f2f251f1abc48d4ff5066694877bd39be0a421377256bc938b793f5 WatchSource:0}: Error finding container 79a264c94f2f251f1abc48d4ff5066694877bd39be0a421377256bc938b793f5: Status 404 returned error can't find the container with id 79a264c94f2f251f1abc48d4ff5066694877bd39be0a421377256bc938b793f5 Feb 03 12:25:11 crc kubenswrapper[4679]: I0203 12:25:11.322772 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-755ddc4dc6-5tjzs"] Feb 03 12:25:11 crc kubenswrapper[4679]: I0203 12:25:11.337095 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rrkcf"] Feb 03 12:25:11 crc kubenswrapper[4679]: I0203 12:25:11.578145 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:25:11 crc kubenswrapper[4679]: W0203 12:25:11.586582 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91d1ab2d_b565_4322_b193_3143ec9b5919.slice/crio-ed473b7574fc7aa2e64f78c7d3578c3498695e60742a060a3a8a8c6abb5f134d WatchSource:0}: Error finding container ed473b7574fc7aa2e64f78c7d3578c3498695e60742a060a3a8a8c6abb5f134d: Status 404 returned error can't find the container with id ed473b7574fc7aa2e64f78c7d3578c3498695e60742a060a3a8a8c6abb5f134d Feb 03 12:25:11 crc kubenswrapper[4679]: I0203 12:25:11.998786 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91d1ab2d-b565-4322-b193-3143ec9b5919","Type":"ContainerStarted","Data":"ed473b7574fc7aa2e64f78c7d3578c3498695e60742a060a3a8a8c6abb5f134d"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.003052 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-84b5f78fb9-n9b9g" event={"ID":"79aa98a4-ef64-4380-a7e1-4d2b04f7279f","Type":"ContainerStarted","Data":"07fae2ab84183685cc0f50aec61e2eeea49ee326b970f684e650741fc52f68d0"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.003293 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b5f78fb9-n9b9g" event={"ID":"79aa98a4-ef64-4380-a7e1-4d2b04f7279f","Type":"ContainerStarted","Data":"4184193abacd23b537ac7046ae7bf8fd557a5e807c933fd578d176dc710fa9e4"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.003617 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84b5f78fb9-n9b9g" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon-log" containerID="cri-o://4184193abacd23b537ac7046ae7bf8fd557a5e807c933fd578d176dc710fa9e4" gracePeriod=30 Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.005060 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84b5f78fb9-n9b9g" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon" containerID="cri-o://07fae2ab84183685cc0f50aec61e2eeea49ee326b970f684e650741fc52f68d0" gracePeriod=30 Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.011753 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74557bdb5d-lsfq8" event={"ID":"a09ad5f1-6af1-452d-a08f-271579ecb3d1","Type":"ContainerStarted","Data":"1c96dcc3f4f73b95a90d38df0181af6c76644cad3af776eef15945abe8c165eb"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.011829 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74557bdb5d-lsfq8" event={"ID":"a09ad5f1-6af1-452d-a08f-271579ecb3d1","Type":"ContainerStarted","Data":"7fa937dc1234edf79fd58cb38f061f8e0b930083113c6fd8fe4f17f3ee6c4f86"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.011843 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74557bdb5d-lsfq8" event={"ID":"a09ad5f1-6af1-452d-a08f-271579ecb3d1","Type":"ContainerStarted","Data":"52ee72ee28365e31ba87974937b0d2f20acc5e0efd66b584b5bfa8f001bbd6ff"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.023771 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755ddc4dc6-5tjzs" event={"ID":"d2c53de0-396a-4234-969c-65e4c2227710","Type":"ContainerStarted","Data":"3d45b94948bb9d6ae9e9251381d37c498a102b8b8646ee693a3ee3f1edcbb7f5"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.024061 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755ddc4dc6-5tjzs" event={"ID":"d2c53de0-396a-4234-969c-65e4c2227710","Type":"ContainerStarted","Data":"79a264c94f2f251f1abc48d4ff5066694877bd39be0a421377256bc938b793f5"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.036519 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84b5f78fb9-n9b9g" podStartSLOduration=3.732507173 podStartE2EDuration="33.03647214s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="2026-02-03 12:24:40.968278571 +0000 UTC m=+1153.443174659" lastFinishedPulling="2026-02-03 12:25:10.272243538 +0000 UTC m=+1182.747139626" observedRunningTime="2026-02-03 12:25:12.027959429 +0000 UTC m=+1184.502855527" watchObservedRunningTime="2026-02-03 12:25:12.03647214 +0000 UTC m=+1184.511368228" Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.036586 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerStarted","Data":"70fd91c995d23e25c9ff114e07fe254c7c4b38e579e232159768192a05733c5a"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.049073 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rrkcf" event={"ID":"ceafb034-bf62-4347-943f-622426408bb5","Type":"ContainerStarted","Data":"ea8d8a7595e1aa053a5c7d9f2baf2eb80cb6d10026656a3bab0d685819910190"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.049169 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rrkcf" event={"ID":"ceafb034-bf62-4347-943f-622426408bb5","Type":"ContainerStarted","Data":"0b27aa3e87cebe7b79722614ee174e0c0cf56a399aded9ae5c45f7d031717da9"} Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.081066 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74557bdb5d-lsfq8" podStartSLOduration=24.081002109 podStartE2EDuration="24.081002109s" podCreationTimestamp="2026-02-03 12:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:12.051980544 +0000 UTC m=+1184.526876632" watchObservedRunningTime="2026-02-03 12:25:12.081002109 +0000 UTC m=+1184.555898207" Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.094438 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-755ddc4dc6-5tjzs" podStartSLOduration=24.094409679 podStartE2EDuration="24.094409679s" podCreationTimestamp="2026-02-03 12:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:12.085752713 +0000 UTC m=+1184.560648821" watchObservedRunningTime="2026-02-03 12:25:12.094409679 +0000 UTC m=+1184.569305767" Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.121301 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rrkcf" podStartSLOduration=13.121272558 podStartE2EDuration="13.121272558s" podCreationTimestamp="2026-02-03 12:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:12.109128592 +0000 UTC m=+1184.584024680" watchObservedRunningTime="2026-02-03 12:25:12.121272558 +0000 UTC m=+1184.596168646" Feb 03 12:25:12 crc kubenswrapper[4679]: I0203 12:25:12.443207 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:25:13 crc kubenswrapper[4679]: I0203 12:25:13.087351 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755ddc4dc6-5tjzs" event={"ID":"d2c53de0-396a-4234-969c-65e4c2227710","Type":"ContainerStarted","Data":"8b50418522d858f152f94f7287764ae5b114c63d74924de8cc696f45b2e543fc"} Feb 03 12:25:13 crc kubenswrapper[4679]: I0203 12:25:13.090378 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91430e11-9487-4d11-96af-226beb9e2c1c","Type":"ContainerStarted","Data":"ad0ac41f82b83a248e3c4f4ccd4ba9a6d6ea1e2f8a23ecf003c5d9f279c54f2e"} Feb 03 12:25:13 crc kubenswrapper[4679]: I0203 12:25:13.095213 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"91d1ab2d-b565-4322-b193-3143ec9b5919","Type":"ContainerStarted","Data":"951674e17f8fccdfaa4ab910c7c7efe2a6193020647d195b5e38143d23aab9ad"} Feb 03 12:25:13 crc kubenswrapper[4679]: I0203 12:25:13.097235 4679 generic.go:334] "Generic (PLEG): container finished" podID="cc594779-2b21-4b8c-8fc6-a2f51273089d" containerID="fdc5a95638ddc022b29a636131da9b5cf00ef384551a808b1bb943d5da051f33" exitCode=0 Feb 03 12:25:13 crc kubenswrapper[4679]: I0203 12:25:13.097311 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m9g7v" event={"ID":"cc594779-2b21-4b8c-8fc6-a2f51273089d","Type":"ContainerDied","Data":"fdc5a95638ddc022b29a636131da9b5cf00ef384551a808b1bb943d5da051f33"} Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.120837 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91d1ab2d-b565-4322-b193-3143ec9b5919","Type":"ContainerStarted","Data":"ebfc05804a632e39ca7bc642c84d42d0a82311fb2854c542dce70379f61c5ba3"} Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.128179 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91430e11-9487-4d11-96af-226beb9e2c1c","Type":"ContainerStarted","Data":"467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480"} Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.157592 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.157577073 podStartE2EDuration="16.157577073s" podCreationTimestamp="2026-02-03 12:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:14.154964484 +0000 UTC m=+1186.629860592" watchObservedRunningTime="2026-02-03 12:25:14.157577073 +0000 UTC m=+1186.632473161" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.550381 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.675517 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rw28\" (UniqueName: \"kubernetes.io/projected/cc594779-2b21-4b8c-8fc6-a2f51273089d-kube-api-access-9rw28\") pod \"cc594779-2b21-4b8c-8fc6-a2f51273089d\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.676020 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-config\") pod \"cc594779-2b21-4b8c-8fc6-a2f51273089d\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.676235 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-combined-ca-bundle\") pod \"cc594779-2b21-4b8c-8fc6-a2f51273089d\" (UID: \"cc594779-2b21-4b8c-8fc6-a2f51273089d\") " Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.699095 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc594779-2b21-4b8c-8fc6-a2f51273089d-kube-api-access-9rw28" (OuterVolumeSpecName: "kube-api-access-9rw28") pod "cc594779-2b21-4b8c-8fc6-a2f51273089d" (UID: "cc594779-2b21-4b8c-8fc6-a2f51273089d"). 
InnerVolumeSpecName "kube-api-access-9rw28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.726223 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc594779-2b21-4b8c-8fc6-a2f51273089d" (UID: "cc594779-2b21-4b8c-8fc6-a2f51273089d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.738747 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-config" (OuterVolumeSpecName: "config") pod "cc594779-2b21-4b8c-8fc6-a2f51273089d" (UID: "cc594779-2b21-4b8c-8fc6-a2f51273089d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.778696 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rw28\" (UniqueName: \"kubernetes.io/projected/cc594779-2b21-4b8c-8fc6-a2f51273089d-kube-api-access-9rw28\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.778743 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:14 crc kubenswrapper[4679]: I0203 12:25:14.778757 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc594779-2b21-4b8c-8fc6-a2f51273089d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.150448 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qg7br" event={"ID":"1de98726-0c88-46ae-9df5-fd6d031233f4","Type":"ContainerStarted","Data":"408c5f73d6a80852df6ff82ec3474d87bca4d6aa3a34678972fbf1adf820ed0a"} Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.155711 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m9g7v" event={"ID":"cc594779-2b21-4b8c-8fc6-a2f51273089d","Type":"ContainerDied","Data":"938900c002d5f1560c5e03e3b530fe271f4b4709ce38e89fd7bf6c2611d33fa2"} Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.155773 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="938900c002d5f1560c5e03e3b530fe271f4b4709ce38e89fd7bf6c2611d33fa2" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.155851 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m9g7v" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.162252 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91430e11-9487-4d11-96af-226beb9e2c1c","Type":"ContainerStarted","Data":"d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec"} Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.192350 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.192328203 podStartE2EDuration="7.192328203s" podCreationTimestamp="2026-02-03 12:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:15.185023082 +0000 UTC m=+1187.659919180" watchObservedRunningTime="2026-02-03 12:25:15.192328203 +0000 UTC m=+1187.667224291" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.494098 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8kvc8"] Feb 03 12:25:15 crc kubenswrapper[4679]: E0203 12:25:15.495275 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc594779-2b21-4b8c-8fc6-a2f51273089d" containerName="neutron-db-sync" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.495307 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc594779-2b21-4b8c-8fc6-a2f51273089d" containerName="neutron-db-sync" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.495648 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc594779-2b21-4b8c-8fc6-a2f51273089d" containerName="neutron-db-sync" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.497007 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.511233 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.511327 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.511403 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-config\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.511520 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.511549 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sn4v\" (UniqueName: \"kubernetes.io/projected/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-kube-api-access-6sn4v\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.511584 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.535182 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8kvc8"] Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.613545 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.614179 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.614251 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-config\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.614322 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.614350 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sn4v\" (UniqueName: \"kubernetes.io/projected/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-kube-api-access-6sn4v\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.614395 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.614630 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.615255 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-config\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.615311 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.615802 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.616228 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-svc\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.655248 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sn4v\" (UniqueName: 
\"kubernetes.io/projected/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-kube-api-access-6sn4v\") pod \"dnsmasq-dns-55f844cf75-8kvc8\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.658008 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b5df755bd-sgmzz"] Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.686712 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.708535 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fl2ch" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.708815 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.709261 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.709663 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.722790 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b5df755bd-sgmzz"] Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.830870 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-config\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.830970 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-combined-ca-bundle\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.831018 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-httpd-config\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.831036 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-ovndb-tls-certs\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.831084 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqf2\" (UniqueName: \"kubernetes.io/projected/02427941-88ff-43f4-a73a-2048cf4b0e7c-kube-api-access-qqqf2\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.856096 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.933729 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-combined-ca-bundle\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.934309 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-httpd-config\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.934331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-ovndb-tls-certs\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.934415 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqqf2\" (UniqueName: \"kubernetes.io/projected/02427941-88ff-43f4-a73a-2048cf4b0e7c-kube-api-access-qqqf2\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.934513 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-config\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.951820 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-combined-ca-bundle\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.952433 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-ovndb-tls-certs\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.953047 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-httpd-config\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.953534 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-config\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:15 crc kubenswrapper[4679]: I0203 12:25:15.963775 4679 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qqqf2\" (UniqueName: \"kubernetes.io/projected/02427941-88ff-43f4-a73a-2048cf4b0e7c-kube-api-access-qqqf2\") pod \"neutron-7b5df755bd-sgmzz\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:16 crc kubenswrapper[4679]: I0203 12:25:16.059088 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:16 crc kubenswrapper[4679]: I0203 12:25:16.194709 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qg7br" podStartSLOduration=5.556936487 podStartE2EDuration="37.194687419s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="2026-02-03 12:24:41.995748016 +0000 UTC m=+1154.470644104" lastFinishedPulling="2026-02-03 12:25:13.633498948 +0000 UTC m=+1186.108395036" observedRunningTime="2026-02-03 12:25:16.189007971 +0000 UTC m=+1188.663904059" watchObservedRunningTime="2026-02-03 12:25:16.194687419 +0000 UTC m=+1188.669583507" Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.182439 4679 generic.go:334] "Generic (PLEG): container finished" podID="ceafb034-bf62-4347-943f-622426408bb5" containerID="ea8d8a7595e1aa053a5c7d9f2baf2eb80cb6d10026656a3bab0d685819910190" exitCode=0 Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.182678 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rrkcf" event={"ID":"ceafb034-bf62-4347-943f-622426408bb5","Type":"ContainerDied","Data":"ea8d8a7595e1aa053a5c7d9f2baf2eb80cb6d10026656a3bab0d685819910190"} Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.917609 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54f6c577-sh4k7"] Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.919545 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.923798 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.929244 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 03 12:25:17 crc kubenswrapper[4679]: I0203 12:25:17.948033 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54f6c577-sh4k7"] Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.097376 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-public-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.097472 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zsz4\" (UniqueName: \"kubernetes.io/projected/796e124c-5af3-4e3b-b261-a5c7f0348bb1-kube-api-access-6zsz4\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.097498 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-ovndb-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.097534 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-internal-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.097566 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-httpd-config\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.097763 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-combined-ca-bundle\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.098131 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-config\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.199785 4679 generic.go:334] "Generic (PLEG): container finished" podID="1de98726-0c88-46ae-9df5-fd6d031233f4" 
containerID="408c5f73d6a80852df6ff82ec3474d87bca4d6aa3a34678972fbf1adf820ed0a" exitCode=0 Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.199884 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qg7br" event={"ID":"1de98726-0c88-46ae-9df5-fd6d031233f4","Type":"ContainerDied","Data":"408c5f73d6a80852df6ff82ec3474d87bca4d6aa3a34678972fbf1adf820ed0a"} Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.199964 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zsz4\" (UniqueName: \"kubernetes.io/projected/796e124c-5af3-4e3b-b261-a5c7f0348bb1-kube-api-access-6zsz4\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.200426 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-ovndb-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.200535 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-internal-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.200598 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-httpd-config\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.200693 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-combined-ca-bundle\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.200978 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-config\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.201166 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-public-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.224346 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-internal-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.230026 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-ovndb-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.244136 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-httpd-config\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.244706 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-combined-ca-bundle\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.249888 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-public-tls-certs\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.254327 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-config\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.258205 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zsz4\" (UniqueName: \"kubernetes.io/projected/796e124c-5af3-4e3b-b261-a5c7f0348bb1-kube-api-access-6zsz4\") pod \"neutron-54f6c577-sh4k7\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.260044 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.522836 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.522902 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.536274 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.537115 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.573625 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.579809 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.662090 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:25:18 crc kubenswrapper[4679]: I0203 12:25:18.662167 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.210139 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.210605 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.357541 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.357622 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.427474 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.461671 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:25:19 crc kubenswrapper[4679]: I0203 12:25:19.926931 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:25:20 crc kubenswrapper[4679]: I0203 12:25:20.234985 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 12:25:20 crc kubenswrapper[4679]: I0203 12:25:20.235036 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.199528 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qg7br" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.215964 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348382 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-combined-ca-bundle\") pod \"1de98726-0c88-46ae-9df5-fd6d031233f4\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348471 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-scripts\") pod \"ceafb034-bf62-4347-943f-622426408bb5\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348523 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-credential-keys\") pod \"ceafb034-bf62-4347-943f-622426408bb5\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348591 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-fernet-keys\") pod \"ceafb034-bf62-4347-943f-622426408bb5\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348616 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-combined-ca-bundle\") pod \"ceafb034-bf62-4347-943f-622426408bb5\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348660 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-config-data\") pod \"ceafb034-bf62-4347-943f-622426408bb5\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348854 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnd8k\" (UniqueName: \"kubernetes.io/projected/ceafb034-bf62-4347-943f-622426408bb5-kube-api-access-pnd8k\") pod \"ceafb034-bf62-4347-943f-622426408bb5\" (UID: \"ceafb034-bf62-4347-943f-622426408bb5\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348900 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-config-data\") pod \"1de98726-0c88-46ae-9df5-fd6d031233f4\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348939 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de98726-0c88-46ae-9df5-fd6d031233f4-logs\") pod \"1de98726-0c88-46ae-9df5-fd6d031233f4\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.348963 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-scripts\") pod \"1de98726-0c88-46ae-9df5-fd6d031233f4\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " Feb 03 
12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.349050 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twjrp\" (UniqueName: \"kubernetes.io/projected/1de98726-0c88-46ae-9df5-fd6d031233f4-kube-api-access-twjrp\") pod \"1de98726-0c88-46ae-9df5-fd6d031233f4\" (UID: \"1de98726-0c88-46ae-9df5-fd6d031233f4\") " Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.361336 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rrkcf" event={"ID":"ceafb034-bf62-4347-943f-622426408bb5","Type":"ContainerDied","Data":"0b27aa3e87cebe7b79722614ee174e0c0cf56a399aded9ae5c45f7d031717da9"} Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.361818 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b27aa3e87cebe7b79722614ee174e0c0cf56a399aded9ae5c45f7d031717da9" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.361903 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rrkcf" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.365490 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de98726-0c88-46ae-9df5-fd6d031233f4-logs" (OuterVolumeSpecName: "logs") pod "1de98726-0c88-46ae-9df5-fd6d031233f4" (UID: "1de98726-0c88-46ae-9df5-fd6d031233f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.376734 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de98726-0c88-46ae-9df5-fd6d031233f4-kube-api-access-twjrp" (OuterVolumeSpecName: "kube-api-access-twjrp") pod "1de98726-0c88-46ae-9df5-fd6d031233f4" (UID: "1de98726-0c88-46ae-9df5-fd6d031233f4"). InnerVolumeSpecName "kube-api-access-twjrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.387687 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qg7br" event={"ID":"1de98726-0c88-46ae-9df5-fd6d031233f4","Type":"ContainerDied","Data":"31912d1938e76960f18f8c61919ce6746d67ba05f1ea32f6b24952e8054c5cbe"} Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.387978 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31912d1938e76960f18f8c61919ce6746d67ba05f1ea32f6b24952e8054c5cbe" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.387798 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qg7br" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.392549 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-scripts" (OuterVolumeSpecName: "scripts") pod "ceafb034-bf62-4347-943f-622426408bb5" (UID: "ceafb034-bf62-4347-943f-622426408bb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.398897 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ceafb034-bf62-4347-943f-622426408bb5" (UID: "ceafb034-bf62-4347-943f-622426408bb5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.399204 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-scripts" (OuterVolumeSpecName: "scripts") pod "1de98726-0c88-46ae-9df5-fd6d031233f4" (UID: "1de98726-0c88-46ae-9df5-fd6d031233f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.408754 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceafb034-bf62-4347-943f-622426408bb5-kube-api-access-pnd8k" (OuterVolumeSpecName: "kube-api-access-pnd8k") pod "ceafb034-bf62-4347-943f-622426408bb5" (UID: "ceafb034-bf62-4347-943f-622426408bb5"). InnerVolumeSpecName "kube-api-access-pnd8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.430660 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ceafb034-bf62-4347-943f-622426408bb5" (UID: "ceafb034-bf62-4347-943f-622426408bb5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.441143 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-config-data" (OuterVolumeSpecName: "config-data") pod "ceafb034-bf62-4347-943f-622426408bb5" (UID: "ceafb034-bf62-4347-943f-622426408bb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.449597 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceafb034-bf62-4347-943f-622426408bb5" (UID: "ceafb034-bf62-4347-943f-622426408bb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.457518 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1de98726-0c88-46ae-9df5-fd6d031233f4" (UID: "1de98726-0c88-46ae-9df5-fd6d031233f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463368 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twjrp\" (UniqueName: \"kubernetes.io/projected/1de98726-0c88-46ae-9df5-fd6d031233f4-kube-api-access-twjrp\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463436 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463542 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463555 4679 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463566 4679 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463625 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463669 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceafb034-bf62-4347-943f-622426408bb5-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463679 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnd8k\" (UniqueName: \"kubernetes.io/projected/ceafb034-bf62-4347-943f-622426408bb5-kube-api-access-pnd8k\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463688 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de98726-0c88-46ae-9df5-fd6d031233f4-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.463696 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.474988 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-config-data" (OuterVolumeSpecName: "config-data") pod "1de98726-0c88-46ae-9df5-fd6d031233f4" (UID: "1de98726-0c88-46ae-9df5-fd6d031233f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.566652 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de98726-0c88-46ae-9df5-fd6d031233f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.855932 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54f6c577-sh4k7"] Feb 03 12:25:21 crc kubenswrapper[4679]: I0203 12:25:21.898420 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8kvc8"] Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.007246 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b5df755bd-sgmzz"] Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.171514 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.172376 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.406390 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b6496f477-9vvrm"] Feb 03 12:25:22 crc kubenswrapper[4679]: E0203 12:25:22.407026 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de98726-0c88-46ae-9df5-fd6d031233f4" containerName="placement-db-sync" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.407049 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de98726-0c88-46ae-9df5-fd6d031233f4" containerName="placement-db-sync" Feb 03 12:25:22 crc kubenswrapper[4679]: E0203 12:25:22.407098 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceafb034-bf62-4347-943f-622426408bb5" containerName="keystone-bootstrap" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.407107 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceafb034-bf62-4347-943f-622426408bb5" containerName="keystone-bootstrap" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.407419 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceafb034-bf62-4347-943f-622426408bb5" containerName="keystone-bootstrap" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.407462 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de98726-0c88-46ae-9df5-fd6d031233f4" containerName="placement-db-sync" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.408431 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.421489 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.421766 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.421869 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.422062 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.422196 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.422346 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hk2x5" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.452668 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" event={"ID":"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e","Type":"ContainerStarted","Data":"9855bb416e3f021b8302afd1795e1ba2b79763f7b6c5f10fc0ffefe1b1c2497b"} Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.453118 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b6496f477-9vvrm"] Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.490590 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerStarted","Data":"efc568263fb215b4d1ec960708e6d6579b8cfb5b8ca58ac69b2aabfea4b2dc78"} Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.510815 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6978559f7b-bwf92"] Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.513060 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.523840 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.524014 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.524126 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.524277 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.531507 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerStarted","Data":"fe1e2e8f905eff2b462d27b0abc257ab3e598a35b698cb2cc299faac0441836c"} Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.546459 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6978559f7b-bwf92"] Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.547172 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xtjfn" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.564634 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54f6c577-sh4k7" event={"ID":"796e124c-5af3-4e3b-b261-a5c7f0348bb1","Type":"ContainerStarted","Data":"c2967118f4e04fe635e234e603378499b10810ba391817010ef28f04e388a11a"} Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.564792 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54f6c577-sh4k7" event={"ID":"796e124c-5af3-4e3b-b261-a5c7f0348bb1","Type":"ContainerStarted","Data":"3d7ad5f7bea713e7388228d111108ed96571805545b07a0e83eaafcdf8311c5b"} Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.605046 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.605078 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.605919 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vdgvw" event={"ID":"c0e98bf9-342d-44dc-9742-1a732178eebd","Type":"ContainerStarted","Data":"9ed62df418c77323446252ab9bceb5da54ee0acbb7f65eb97cdee2ebdcbb8ec0"} Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.608775 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-public-tls-certs\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.608827 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-credential-keys\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.608858 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-combined-ca-bundle\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.608893 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-scripts\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.608950 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd28h\" (UniqueName: \"kubernetes.io/projected/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-kube-api-access-cd28h\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.608995 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-fernet-keys\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.609033 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-config-data\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.609054 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-internal-tls-certs\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.669699 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vdgvw" podStartSLOduration=4.214332446 podStartE2EDuration="43.669665505s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="2026-02-03 12:24:41.767766234 +0000 UTC m=+1154.242662322" lastFinishedPulling="2026-02-03 12:25:21.223099293 +0000 UTC m=+1193.697995381" observedRunningTime="2026-02-03 12:25:22.663959376 +0000 UTC m=+1195.138855464" watchObservedRunningTime="2026-02-03 12:25:22.669665505 +0000 UTC m=+1195.144561603" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.716819 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-combined-ca-bundle\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.716885 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-internal-tls-certs\") pod 
\"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.716907 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-config-data\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.716942 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dgc6\" (UniqueName: \"kubernetes.io/projected/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-kube-api-access-7dgc6\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717041 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-public-tls-certs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717064 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-config-data\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717085 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-public-tls-certs\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717106 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-credential-keys\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717125 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-internal-tls-certs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717152 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-combined-ca-bundle\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717196 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-scripts\") pod \"keystone-5b6496f477-9vvrm\" 
(UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717226 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-logs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717323 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd28h\" (UniqueName: \"kubernetes.io/projected/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-kube-api-access-cd28h\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717392 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-scripts\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.717422 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-fernet-keys\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.734189 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-internal-tls-certs\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.736545 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-config-data\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.739159 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-public-tls-certs\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.740190 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-fernet-keys\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.761901 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-credential-keys\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 
12:25:22.764909 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-scripts\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.767072 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd28h\" (UniqueName: \"kubernetes.io/projected/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-kube-api-access-cd28h\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.766515 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d40e305-3fdf-4ce8-a586-7f2b9786e0eb-combined-ca-bundle\") pod \"keystone-5b6496f477-9vvrm\" (UID: \"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb\") " pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.817061 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.821910 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-scripts\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.822064 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-combined-ca-bundle\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.822146 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dgc6\" (UniqueName: \"kubernetes.io/projected/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-kube-api-access-7dgc6\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.822335 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-public-tls-certs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.822417 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-config-data\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.822473 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-internal-tls-certs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " 
pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.822556 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-logs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.823491 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-logs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.828987 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-scripts\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.863121 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-combined-ca-bundle\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.870521 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.872624 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-public-tls-certs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.873131 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dgc6\" (UniqueName: \"kubernetes.io/projected/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-kube-api-access-7dgc6\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.883977 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-config-data\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:22 crc kubenswrapper[4679]: I0203 12:25:22.885800 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-internal-tls-certs\") pod \"placement-6978559f7b-bwf92\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.206387 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.232471 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.643001 4679 generic.go:334] "Generic (PLEG): container finished" podID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerID="3368218516df28db90be66e671fee30820f616eda539a01a0ffb8cfe414952e4" exitCode=0 Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.643763 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" event={"ID":"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e","Type":"ContainerDied","Data":"3368218516df28db90be66e671fee30820f616eda539a01a0ffb8cfe414952e4"} Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.668187 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerStarted","Data":"355b2f28a461053a40a5afe54734aa327658a94f52cd8f0d58e26fa7fd01f890"} Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.668259 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.668275 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerStarted","Data":"65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7"} Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.735186 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.736796 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54f6c577-sh4k7" event={"ID":"796e124c-5af3-4e3b-b261-a5c7f0348bb1","Type":"ContainerStarted","Data":"dc9a41aed40a2ec3da74c68eb6358cac759e6eff74b946a53bc543ef78a242f1"} Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.736842 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.787503 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b6496f477-9vvrm"] Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.815768 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b5df755bd-sgmzz"] Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.841437 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b98ff4cb5-lk4d9"] Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.843902 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906140 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-combined-ca-bundle\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906209 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-httpd-config\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906253 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-config\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906319 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7tv\" (UniqueName: \"kubernetes.io/projected/fd629794-5ce3-4d07-9f6c-c0a85424379f-kube-api-access-8r7tv\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906496 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-public-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906543 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-ovndb-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.906568 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-internal-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.929560 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b98ff4cb5-lk4d9"] Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.930548 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b5df755bd-sgmzz" podStartSLOduration=8.930512522 podStartE2EDuration="8.930512522s" podCreationTimestamp="2026-02-03 12:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:23.772506948 +0000 UTC m=+1196.247403036" watchObservedRunningTime="2026-02-03 
12:25:23.930512522 +0000 UTC m=+1196.405408610" Feb 03 12:25:23 crc kubenswrapper[4679]: I0203 12:25:23.969511 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54f6c577-sh4k7" podStartSLOduration=6.969482426 podStartE2EDuration="6.969482426s" podCreationTimestamp="2026-02-03 12:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:23.807172661 +0000 UTC m=+1196.282068749" watchObservedRunningTime="2026-02-03 12:25:23.969482426 +0000 UTC m=+1196.444378514" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.000200 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6978559f7b-bwf92"] Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013187 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7tv\" (UniqueName: \"kubernetes.io/projected/fd629794-5ce3-4d07-9f6c-c0a85424379f-kube-api-access-8r7tv\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013259 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-public-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013306 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-ovndb-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013337 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-internal-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013406 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-combined-ca-bundle\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013460 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-httpd-config\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.013507 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-config\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.020394 4679 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-ovndb-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.038176 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-public-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.051200 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-internal-tls-certs\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.053989 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-combined-ca-bundle\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.054478 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-httpd-config\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.059027 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7tv\" (UniqueName: \"kubernetes.io/projected/fd629794-5ce3-4d07-9f6c-c0a85424379f-kube-api-access-8r7tv\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.062213 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fd629794-5ce3-4d07-9f6c-c0a85424379f-config\") pod \"neutron-7b98ff4cb5-lk4d9\" (UID: \"fd629794-5ce3-4d07-9f6c-c0a85424379f\") " pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.206397 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.786698 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b6496f477-9vvrm" event={"ID":"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb","Type":"ContainerStarted","Data":"ef82e2e7f0c94813c33a3ce012bcf5e2e8ed1f226a1e1c1517665794c8ab0922"} Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.787604 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b6496f477-9vvrm" event={"ID":"0d40e305-3fdf-4ce8-a586-7f2b9786e0eb","Type":"ContainerStarted","Data":"df36b1dc5cf08e3886c06cf335269bed8ee6c64c3d666a67ebd7452a3e20bd28"} Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.787658 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.808492 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/0.log" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.811922 4679 generic.go:334] "Generic (PLEG): container finished" podID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerID="355b2f28a461053a40a5afe54734aa327658a94f52cd8f0d58e26fa7fd01f890" exitCode=1 Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.812843 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerDied","Data":"355b2f28a461053a40a5afe54734aa327658a94f52cd8f0d58e26fa7fd01f890"} Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.812925 4679 scope.go:117] "RemoveContainer" containerID="355b2f28a461053a40a5afe54734aa327658a94f52cd8f0d58e26fa7fd01f890" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.828533 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6978559f7b-bwf92" event={"ID":"2038b7f4-1671-44c7-bd2b-d5b21bc654fa","Type":"ContainerStarted","Data":"c40fe8a682144dd623168e0231595817581fab1c58a261b518773121d6eaac2d"} Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.833139 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.835731 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b6496f477-9vvrm" podStartSLOduration=2.835708488 podStartE2EDuration="2.835708488s" podCreationTimestamp="2026-02-03 12:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:24.811802295 +0000 UTC m=+1197.286698403" watchObservedRunningTime="2026-02-03 12:25:24.835708488 +0000 UTC m=+1197.310604576" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.848941 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" event={"ID":"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e","Type":"ContainerStarted","Data":"04476df0de14bbd6a3118da537a3ee9c0ae29b106e9bd50997e2ea626b1af3be"} Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.849329 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:24 crc kubenswrapper[4679]: I0203 12:25:24.989937 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" podStartSLOduration=9.989905692 podStartE2EDuration="9.989905692s" podCreationTimestamp="2026-02-03 12:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:24.937643212 +0000 UTC m=+1197.412539300" watchObservedRunningTime="2026-02-03 12:25:24.989905692 +0000 UTC m=+1197.464801780" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.156323 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b98ff4cb5-lk4d9"] Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.750503 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79464686c6-vwq7l"] Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.753086 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.788501 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79464686c6-vwq7l"] Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.805790 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-combined-ca-bundle\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.805856 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b242f52b-0a15-4493-9da2-15aca091df48-logs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.805889 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnznk\" (UniqueName: \"kubernetes.io/projected/b242f52b-0a15-4493-9da2-15aca091df48-kube-api-access-nnznk\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.805928 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-config-data\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.805964 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-public-tls-certs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.806018 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-scripts\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 
12:25:25.806063 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-internal-tls-certs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.901529 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/1.log" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.906056 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/0.log" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907687 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-combined-ca-bundle\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907735 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b242f52b-0a15-4493-9da2-15aca091df48-logs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907770 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnznk\" (UniqueName: \"kubernetes.io/projected/b242f52b-0a15-4493-9da2-15aca091df48-kube-api-access-nnznk\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907797 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-config-data\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907822 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-public-tls-certs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907862 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-scripts\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.907896 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-internal-tls-certs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.909538 4679 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b242f52b-0a15-4493-9da2-15aca091df48-logs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.911862 4679 generic.go:334] "Generic (PLEG): container finished" podID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerID="8eeffccde05570e59c434f4f3116cbfcd0459f2467683bfd18a1a1623ba14db8" exitCode=1 Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.911993 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerDied","Data":"8eeffccde05570e59c434f4f3116cbfcd0459f2467683bfd18a1a1623ba14db8"} Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.912066 4679 scope.go:117] "RemoveContainer" containerID="355b2f28a461053a40a5afe54734aa327658a94f52cd8f0d58e26fa7fd01f890" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.912405 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b5df755bd-sgmzz" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-api" containerID="cri-o://65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7" gracePeriod=30 Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.916268 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-combined-ca-bundle\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.921201 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-config-data\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.923936 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-internal-tls-certs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.928335 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnznk\" (UniqueName: \"kubernetes.io/projected/b242f52b-0a15-4493-9da2-15aca091df48-kube-api-access-nnznk\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.929139 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-public-tls-certs\") pod \"placement-79464686c6-vwq7l\" (UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.931629 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b242f52b-0a15-4493-9da2-15aca091df48-scripts\") pod \"placement-79464686c6-vwq7l\" 
(UID: \"b242f52b-0a15-4493-9da2-15aca091df48\") " pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.935933 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6978559f7b-bwf92" event={"ID":"2038b7f4-1671-44c7-bd2b-d5b21bc654fa","Type":"ContainerStarted","Data":"801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1"} Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.936035 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6978559f7b-bwf92" event={"ID":"2038b7f4-1671-44c7-bd2b-d5b21bc654fa","Type":"ContainerStarted","Data":"e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44"} Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.937469 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.937514 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.949120 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b98ff4cb5-lk4d9" event={"ID":"fd629794-5ce3-4d07-9f6c-c0a85424379f","Type":"ContainerStarted","Data":"b9f4de537df20e07104aa7ddd16f565cddacd1b5b35db4867166bb42a20a3b7e"} Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.949213 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b98ff4cb5-lk4d9" event={"ID":"fd629794-5ce3-4d07-9f6c-c0a85424379f","Type":"ContainerStarted","Data":"869505d4cd2d4c48f984eb2efae13bc177896feaba4f8c7353392656be48b8df"} Feb 03 12:25:25 crc kubenswrapper[4679]: I0203 12:25:25.971324 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6978559f7b-bwf92" podStartSLOduration=3.971291293 podStartE2EDuration="3.971291293s" podCreationTimestamp="2026-02-03 12:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:25.970185474 +0000 UTC m=+1198.445081582" watchObservedRunningTime="2026-02-03 12:25:25.971291293 +0000 UTC m=+1198.446187371" Feb 03 12:25:26 crc kubenswrapper[4679]: I0203 12:25:26.090978 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:26 crc kubenswrapper[4679]: I0203 12:25:26.738026 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79464686c6-vwq7l"] Feb 03 12:25:26 crc kubenswrapper[4679]: W0203 12:25:26.751280 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb242f52b_0a15_4493_9da2_15aca091df48.slice/crio-b382a5a1909a5e1b15f7964bc9b27bbe9ed0a9c99e7d269256772397a83240f7 WatchSource:0}: Error finding container b382a5a1909a5e1b15f7964bc9b27bbe9ed0a9c99e7d269256772397a83240f7: Status 404 returned error can't find the container with id b382a5a1909a5e1b15f7964bc9b27bbe9ed0a9c99e7d269256772397a83240f7 Feb 03 12:25:26 crc kubenswrapper[4679]: I0203 12:25:26.969166 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79464686c6-vwq7l" event={"ID":"b242f52b-0a15-4493-9da2-15aca091df48","Type":"ContainerStarted","Data":"b382a5a1909a5e1b15f7964bc9b27bbe9ed0a9c99e7d269256772397a83240f7"} Feb 03 12:25:26 crc kubenswrapper[4679]: I0203 12:25:26.984718 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/1.log" Feb 03 12:25:26 crc kubenswrapper[4679]: I0203 12:25:26.996637 4679 generic.go:334] "Generic (PLEG): container finished" podID="c0e98bf9-342d-44dc-9742-1a732178eebd" containerID="9ed62df418c77323446252ab9bceb5da54ee0acbb7f65eb97cdee2ebdcbb8ec0" exitCode=0 Feb 03 12:25:26 crc kubenswrapper[4679]: I0203 12:25:26.996729 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vdgvw" event={"ID":"c0e98bf9-342d-44dc-9742-1a732178eebd","Type":"ContainerDied","Data":"9ed62df418c77323446252ab9bceb5da54ee0acbb7f65eb97cdee2ebdcbb8ec0"} Feb 03 12:25:27 crc kubenswrapper[4679]: I0203 12:25:27.000400 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b98ff4cb5-lk4d9" event={"ID":"fd629794-5ce3-4d07-9f6c-c0a85424379f","Type":"ContainerStarted","Data":"019ecbb3e83c995bba66f4d31276076f0c4e797ccd39f147a25359c820d89dda"} Feb 03 12:25:27 crc kubenswrapper[4679]: I0203 12:25:27.001854 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:27 crc kubenswrapper[4679]: I0203 12:25:27.061426 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b98ff4cb5-lk4d9" podStartSLOduration=4.061400854 podStartE2EDuration="4.061400854s" podCreationTimestamp="2026-02-03 12:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:27.045168632 +0000 UTC m=+1199.520064720" watchObservedRunningTime="2026-02-03 12:25:27.061400854 +0000 UTC m=+1199.536296942" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.017604 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7n6pw" event={"ID":"bc9ca558-ad13-4599-80e5-05be55c84a55","Type":"ContainerStarted","Data":"0e379761e057915029ca52b8cfb63668fdef36bb93cd5f451e059ae89c51ab98"} Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.033641 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79464686c6-vwq7l" event={"ID":"b242f52b-0a15-4493-9da2-15aca091df48","Type":"ContainerStarted","Data":"542053f74453f7802ee4c57419ebdf22c2090042a879166c22b0fe0e3b3dc018"} Feb 03 12:25:28 
crc kubenswrapper[4679]: I0203 12:25:28.033714 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79464686c6-vwq7l" event={"ID":"b242f52b-0a15-4493-9da2-15aca091df48","Type":"ContainerStarted","Data":"8b2d592e5829be51fea0e380369876fb73416992961eba136ddef69f5709fafd"} Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.034084 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.053172 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7n6pw" podStartSLOduration=4.980738318 podStartE2EDuration="49.053132893s" podCreationTimestamp="2026-02-03 12:24:39 +0000 UTC" firstStartedPulling="2026-02-03 12:24:41.76724973 +0000 UTC m=+1154.242145818" lastFinishedPulling="2026-02-03 12:25:25.839644305 +0000 UTC m=+1198.314540393" observedRunningTime="2026-02-03 12:25:28.040120714 +0000 UTC m=+1200.515016802" watchObservedRunningTime="2026-02-03 12:25:28.053132893 +0000 UTC m=+1200.528028981" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.076372 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79464686c6-vwq7l" podStartSLOduration=3.076319047 podStartE2EDuration="3.076319047s" podCreationTimestamp="2026-02-03 12:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:28.06566744 +0000 UTC m=+1200.540563548" watchObservedRunningTime="2026-02-03 12:25:28.076319047 +0000 UTC m=+1200.551215135" Feb 03 12:25:28 crc kubenswrapper[4679]: E0203 12:25:28.233571 4679 info.go:109] Failed to get network devices: open /sys/class/net/67a3203baea9b8e/address: no such file or directory Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.462281 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.543214 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.596487 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-db-sync-config-data\") pod \"c0e98bf9-342d-44dc-9742-1a732178eebd\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.596872 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-combined-ca-bundle\") pod \"c0e98bf9-342d-44dc-9742-1a732178eebd\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.597018 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr7km\" (UniqueName: \"kubernetes.io/projected/c0e98bf9-342d-44dc-9742-1a732178eebd-kube-api-access-xr7km\") pod \"c0e98bf9-342d-44dc-9742-1a732178eebd\" (UID: \"c0e98bf9-342d-44dc-9742-1a732178eebd\") " Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.606148 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c0e98bf9-342d-44dc-9742-1a732178eebd" (UID: "c0e98bf9-342d-44dc-9742-1a732178eebd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.611694 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e98bf9-342d-44dc-9742-1a732178eebd-kube-api-access-xr7km" (OuterVolumeSpecName: "kube-api-access-xr7km") pod "c0e98bf9-342d-44dc-9742-1a732178eebd" (UID: "c0e98bf9-342d-44dc-9742-1a732178eebd"). InnerVolumeSpecName "kube-api-access-xr7km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.679510 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74557bdb5d-lsfq8" podUID="a09ad5f1-6af1-452d-a08f-271579ecb3d1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.695555 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0e98bf9-342d-44dc-9742-1a732178eebd" (UID: "c0e98bf9-342d-44dc-9742-1a732178eebd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.701197 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr7km\" (UniqueName: \"kubernetes.io/projected/c0e98bf9-342d-44dc-9742-1a732178eebd-kube-api-access-xr7km\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.701247 4679 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:28 crc kubenswrapper[4679]: I0203 12:25:28.701258 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e98bf9-342d-44dc-9742-1a732178eebd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.072319 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vdgvw" event={"ID":"c0e98bf9-342d-44dc-9742-1a732178eebd","Type":"ContainerDied","Data":"67a3203baea9b8e73d52468c580347e31bb83137edf54bfb27f14ebb98e1998a"} Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.072389 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a3203baea9b8e73d52468c580347e31bb83137edf54bfb27f14ebb98e1998a" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.072484 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vdgvw" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.073466 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.320883 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-789dd74f99-dtwb4"] Feb 03 12:25:29 crc kubenswrapper[4679]: E0203 12:25:29.321523 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e98bf9-342d-44dc-9742-1a732178eebd" containerName="barbican-db-sync" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.321547 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e98bf9-342d-44dc-9742-1a732178eebd" containerName="barbican-db-sync" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.321842 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e98bf9-342d-44dc-9742-1a732178eebd" containerName="barbican-db-sync" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.323129 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.330544 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.343541 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.346488 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9bhxw" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.369907 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-789dd74f99-dtwb4"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.394661 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7d4655f6d4-rdwtj"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.414271 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.423683 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.426996 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftgw\" (UniqueName: \"kubernetes.io/projected/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-kube-api-access-mftgw\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427146 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-logs\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427219 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0786ef5c-404a-4c24-8188-d757082c1419-logs\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427314 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67lbn\" (UniqueName: \"kubernetes.io/projected/0786ef5c-404a-4c24-8188-d757082c1419-kube-api-access-67lbn\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427442 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-config-data-custom\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427493 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-config-data-custom\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427567 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-combined-ca-bundle\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427626 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-config-data\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427662 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-config-data\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.427727 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-combined-ca-bundle\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.428686 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7d4655f6d4-rdwtj"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532189 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mftgw\" (UniqueName: \"kubernetes.io/projected/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-kube-api-access-mftgw\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532274 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-logs\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532316 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0786ef5c-404a-4c24-8188-d757082c1419-logs\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532399 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-67lbn\" (UniqueName: \"kubernetes.io/projected/0786ef5c-404a-4c24-8188-d757082c1419-kube-api-access-67lbn\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532446 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-config-data-custom\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532465 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-config-data-custom\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532499 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-combined-ca-bundle\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532524 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-config-data\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532540 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-config-data\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532567 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-combined-ca-bundle\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.532864 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-logs\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.533623 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0786ef5c-404a-4c24-8188-d757082c1419-logs\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc 
kubenswrapper[4679]: I0203 12:25:29.539948 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-config-data\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.543736 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-config-data-custom\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.547473 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-combined-ca-bundle\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.550297 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-config-data-custom\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.560310 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-combined-ca-bundle\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.562719 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0786ef5c-404a-4c24-8188-d757082c1419-config-data\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.567044 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67lbn\" (UniqueName: \"kubernetes.io/projected/0786ef5c-404a-4c24-8188-d757082c1419-kube-api-access-67lbn\") pod \"barbican-worker-789dd74f99-dtwb4\" (UID: \"0786ef5c-404a-4c24-8188-d757082c1419\") " pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.597092 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mftgw\" (UniqueName: \"kubernetes.io/projected/91c5b9c5-d4c7-4138-90de-ee51de9f7a5f-kube-api-access-mftgw\") pod \"barbican-keystone-listener-7d4655f6d4-rdwtj\" (UID: \"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f\") " pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.597259 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8kvc8"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.597678 4679 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="dnsmasq-dns" containerID="cri-o://04476df0de14bbd6a3118da537a3ee9c0ae29b106e9bd50997e2ea626b1af3be" gracePeriod=10 Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.609316 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.648180 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-7fxst"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.667567 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.673451 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d9bddd974-qkcc6"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.675576 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.699178 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.710270 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-789dd74f99-dtwb4" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.713553 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-7fxst"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.734428 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d9bddd974-qkcc6"] Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741537 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-svc\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741585 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-combined-ca-bundle\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741607 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741647 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741673 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741728 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data-custom\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741759 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-config\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741782 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4fh\" (UniqueName: \"kubernetes.io/projected/7d506818-e4fd-46d6-a225-d1685dea3d6d-kube-api-access-jt4fh\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741821 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d506818-e4fd-46d6-a225-d1685dea3d6d-logs\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741840 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcnmj\" (UniqueName: \"kubernetes.io/projected/c316e952-31f5-42ef-bd28-f09412dd0118-kube-api-access-kcnmj\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.741859 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.786050 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851255 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-config\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851329 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4fh\" (UniqueName: \"kubernetes.io/projected/7d506818-e4fd-46d6-a225-d1685dea3d6d-kube-api-access-jt4fh\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851410 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d506818-e4fd-46d6-a225-d1685dea3d6d-logs\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851435 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcnmj\" (UniqueName: \"kubernetes.io/projected/c316e952-31f5-42ef-bd28-f09412dd0118-kube-api-access-kcnmj\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851456 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851475 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-svc\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851498 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-combined-ca-bundle\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851520 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851562 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " 
pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851587 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.851647 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data-custom\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.853974 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-config\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.854777 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d506818-e4fd-46d6-a225-d1685dea3d6d-logs\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.855494 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.855983 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-svc\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.862000 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.862604 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.895719 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.899210 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data-custom\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.906115 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4fh\" (UniqueName: \"kubernetes.io/projected/7d506818-e4fd-46d6-a225-d1685dea3d6d-kube-api-access-jt4fh\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.910181 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-combined-ca-bundle\") pod \"barbican-api-d9bddd974-qkcc6\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:29 crc kubenswrapper[4679]: I0203 12:25:29.915653 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcnmj\" (UniqueName: \"kubernetes.io/projected/c316e952-31f5-42ef-bd28-f09412dd0118-kube-api-access-kcnmj\") pod \"dnsmasq-dns-85ff748b95-7fxst\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") " pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.034676 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.057072 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.145157 4679 generic.go:334] "Generic (PLEG): container finished" podID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerID="04476df0de14bbd6a3118da537a3ee9c0ae29b106e9bd50997e2ea626b1af3be" exitCode=0 Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.145815 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" event={"ID":"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e","Type":"ContainerDied","Data":"04476df0de14bbd6a3118da537a3ee9c0ae29b106e9bd50997e2ea626b1af3be"} Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.452044 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-789dd74f99-dtwb4"] Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.517461 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7d4655f6d4-rdwtj"] Feb 03 12:25:30 crc kubenswrapper[4679]: I0203 12:25:30.800869 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d9bddd974-qkcc6"] Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.089781 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-7fxst"] Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.178176 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" event={"ID":"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f","Type":"ContainerStarted","Data":"556d9c30901d8e2fd6b4ce70eb9a730f5ce0307e4db1d96eb7d42b5eafda9214"} Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.196392 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" event={"ID":"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e","Type":"ContainerDied","Data":"9855bb416e3f021b8302afd1795e1ba2b79763f7b6c5f10fc0ffefe1b1c2497b"} Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.196562 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9855bb416e3f021b8302afd1795e1ba2b79763f7b6c5f10fc0ffefe1b1c2497b" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.197262 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.204837 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-789dd74f99-dtwb4" event={"ID":"0786ef5c-404a-4c24-8188-d757082c1419","Type":"ContainerStarted","Data":"f69d8564b7fdfa4771d6a2835681a192aafefd1148a31f800835c56ba4c10af9"} Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.210664 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" event={"ID":"c316e952-31f5-42ef-bd28-f09412dd0118","Type":"ContainerStarted","Data":"34a8d1e271deb757287af88b2e4e83c05df853d1846a03e6ff77bbe6909a0cc2"} Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.211416 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-nb\") pod \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.211518 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-svc\") pod \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.211643 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-swift-storage-0\") pod \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.211726 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sn4v\" (UniqueName: \"kubernetes.io/projected/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-kube-api-access-6sn4v\") pod \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.211790 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-config\") pod \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.211808 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-sb\") pod \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\" (UID: \"6dc62f2a-c70d-4c2c-a10d-f252b7d9692e\") " Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.225747 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d9bddd974-qkcc6" 
event={"ID":"7d506818-e4fd-46d6-a225-d1685dea3d6d","Type":"ContainerStarted","Data":"387b2bf944122d2d71bb49301861367f3d343a4dde74a38f905059a695381e43"} Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.225852 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d9bddd974-qkcc6" event={"ID":"7d506818-e4fd-46d6-a225-d1685dea3d6d","Type":"ContainerStarted","Data":"4ec5bc1e97206390dcaac8189c86d5cbb9771437120c042b321f69c79cb5c020"} Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.234258 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-kube-api-access-6sn4v" (OuterVolumeSpecName: "kube-api-access-6sn4v") pod "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" (UID: "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e"). InnerVolumeSpecName "kube-api-access-6sn4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.314204 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sn4v\" (UniqueName: \"kubernetes.io/projected/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-kube-api-access-6sn4v\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.320218 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-config" (OuterVolumeSpecName: "config") pod "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" (UID: "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.351324 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" (UID: "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.368300 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" (UID: "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.375111 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" (UID: "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.427216 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.427261 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.427272 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.432826 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.536436 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" (UID: "6dc62f2a-c70d-4c2c-a10d-f252b7d9692e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:31 crc kubenswrapper[4679]: I0203 12:25:31.637646 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.260344 4679 generic.go:334] "Generic (PLEG): container finished" podID="c316e952-31f5-42ef-bd28-f09412dd0118" containerID="55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a" exitCode=0 Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.260587 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" event={"ID":"c316e952-31f5-42ef-bd28-f09412dd0118","Type":"ContainerDied","Data":"55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a"} Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.285193 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.286974 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d9bddd974-qkcc6" event={"ID":"7d506818-e4fd-46d6-a225-d1685dea3d6d","Type":"ContainerStarted","Data":"0a6af023cca3a3cd2012aafa83fe7fb0ed8eb324a4dc0df9134209f0cc4fec72"} Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.287024 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.287052 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.306654 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c6c4bcd4b-hkzh6"] Feb 03 12:25:32 crc kubenswrapper[4679]: E0203 12:25:32.307276 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="init" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.307294 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="init" Feb 03 12:25:32 crc kubenswrapper[4679]: E0203 12:25:32.307329 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="dnsmasq-dns" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.307343 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="dnsmasq-dns" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.307645 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="dnsmasq-dns" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.309067 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.315632 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.315902 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.345026 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6c4bcd4b-hkzh6"] Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368058 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-logs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368224 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-internal-tls-certs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368394 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-config-data-custom\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368560 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-combined-ca-bundle\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368596 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-public-tls-certs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368743 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvgm\" (UniqueName: \"kubernetes.io/projected/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-kube-api-access-jlvgm\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.368870 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-config-data\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.393861 4679 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8kvc8"] Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.413875 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8kvc8"] Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.419937 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d9bddd974-qkcc6" podStartSLOduration=3.419917762 podStartE2EDuration="3.419917762s" podCreationTimestamp="2026-02-03 12:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:32.362832476 +0000 UTC m=+1204.837728564" watchObservedRunningTime="2026-02-03 12:25:32.419917762 +0000 UTC m=+1204.894813850" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471571 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-logs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471660 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-internal-tls-certs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471715 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-config-data-custom\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471756 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-combined-ca-bundle\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471775 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-public-tls-certs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471813 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvgm\" (UniqueName: \"kubernetes.io/projected/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-kube-api-access-jlvgm\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.471843 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-config-data\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc 
kubenswrapper[4679]: I0203 12:25:32.473865 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-logs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.479880 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-public-tls-certs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.480477 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-config-data-custom\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.493318 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-internal-tls-certs\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.498574 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-combined-ca-bundle\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.499417 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvgm\" (UniqueName: \"kubernetes.io/projected/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-kube-api-access-jlvgm\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.505714 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa77b26-ca52-4ef9-a1c2-68237a080e1b-config-data\") pod \"barbican-api-7c6c4bcd4b-hkzh6\" (UID: \"2aa77b26-ca52-4ef9-a1c2-68237a080e1b\") " pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:32 crc kubenswrapper[4679]: I0203 12:25:32.654635 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:34 crc kubenswrapper[4679]: I0203 12:25:34.227167 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" path="/var/lib/kubelet/pods/6dc62f2a-c70d-4c2c-a10d-f252b7d9692e/volumes" Feb 03 12:25:35 crc kubenswrapper[4679]: E0203 12:25:35.679157 4679 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9ca558_ad13_4599_80e5_05be55c84a55.slice/crio-0e379761e057915029ca52b8cfb63668fdef36bb93cd5f451e059ae89c51ab98.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9ca558_ad13_4599_80e5_05be55c84a55.slice/crio-conmon-0e379761e057915029ca52b8cfb63668fdef36bb93cd5f451e059ae89c51ab98.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:25:35 crc kubenswrapper[4679]: I0203 12:25:35.869226 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-8kvc8" podUID="6dc62f2a-c70d-4c2c-a10d-f252b7d9692e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Feb 03 12:25:36 crc kubenswrapper[4679]: I0203 12:25:36.362407 4679 generic.go:334] "Generic (PLEG): container finished" podID="bc9ca558-ad13-4599-80e5-05be55c84a55" containerID="0e379761e057915029ca52b8cfb63668fdef36bb93cd5f451e059ae89c51ab98" exitCode=0 Feb 03 12:25:36 crc kubenswrapper[4679]: I0203 12:25:36.362488 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7n6pw" event={"ID":"bc9ca558-ad13-4599-80e5-05be55c84a55","Type":"ContainerDied","Data":"0e379761e057915029ca52b8cfb63668fdef36bb93cd5f451e059ae89c51ab98"} Feb 03 12:25:36 crc kubenswrapper[4679]: I0203 12:25:36.735692 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:25:36 crc kubenswrapper[4679]: I0203 12:25:36.735801 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:25:38 crc kubenswrapper[4679]: I0203 12:25:38.537442 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 12:25:38 crc kubenswrapper[4679]: I0203 12:25:38.663448 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74557bdb5d-lsfq8" podUID="a09ad5f1-6af1-452d-a08f-271579ecb3d1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.416976 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7n6pw" 
event={"ID":"bc9ca558-ad13-4599-80e5-05be55c84a55","Type":"ContainerDied","Data":"49fcaaca2753fd2597225726d76eba6f1efc8bec4c50db05ced84b26552ada11"} Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.417387 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49fcaaca2753fd2597225726d76eba6f1efc8bec4c50db05ced84b26552ada11" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.473481 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.577691 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-scripts\") pod \"bc9ca558-ad13-4599-80e5-05be55c84a55\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.578226 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-combined-ca-bundle\") pod \"bc9ca558-ad13-4599-80e5-05be55c84a55\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.578265 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-db-sync-config-data\") pod \"bc9ca558-ad13-4599-80e5-05be55c84a55\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.578394 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-config-data\") pod \"bc9ca558-ad13-4599-80e5-05be55c84a55\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.578414 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9ca558-ad13-4599-80e5-05be55c84a55-etc-machine-id\") pod \"bc9ca558-ad13-4599-80e5-05be55c84a55\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.578493 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxs6s\" (UniqueName: \"kubernetes.io/projected/bc9ca558-ad13-4599-80e5-05be55c84a55-kube-api-access-jxs6s\") pod \"bc9ca558-ad13-4599-80e5-05be55c84a55\" (UID: \"bc9ca558-ad13-4599-80e5-05be55c84a55\") " Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.583266 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc9ca558-ad13-4599-80e5-05be55c84a55-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc9ca558-ad13-4599-80e5-05be55c84a55" (UID: "bc9ca558-ad13-4599-80e5-05be55c84a55"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.603856 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9ca558-ad13-4599-80e5-05be55c84a55-kube-api-access-jxs6s" (OuterVolumeSpecName: "kube-api-access-jxs6s") pod "bc9ca558-ad13-4599-80e5-05be55c84a55" (UID: "bc9ca558-ad13-4599-80e5-05be55c84a55"). InnerVolumeSpecName "kube-api-access-jxs6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.610645 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc9ca558-ad13-4599-80e5-05be55c84a55" (UID: "bc9ca558-ad13-4599-80e5-05be55c84a55"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.612442 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-scripts" (OuterVolumeSpecName: "scripts") pod "bc9ca558-ad13-4599-80e5-05be55c84a55" (UID: "bc9ca558-ad13-4599-80e5-05be55c84a55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.683235 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.683279 4679 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.683298 4679 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9ca558-ad13-4599-80e5-05be55c84a55-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.683312 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxs6s\" (UniqueName: \"kubernetes.io/projected/bc9ca558-ad13-4599-80e5-05be55c84a55-kube-api-access-jxs6s\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.733430 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc9ca558-ad13-4599-80e5-05be55c84a55" (UID: "bc9ca558-ad13-4599-80e5-05be55c84a55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.749315 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6c4bcd4b-hkzh6"] Feb 03 12:25:39 crc kubenswrapper[4679]: E0203 12:25:39.759739 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.777231 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-config-data" (OuterVolumeSpecName: "config-data") pod "bc9ca558-ad13-4599-80e5-05be55c84a55" (UID: "bc9ca558-ad13-4599-80e5-05be55c84a55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.785070 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:39 crc kubenswrapper[4679]: I0203 12:25:39.785111 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9ca558-ad13-4599-80e5-05be55c84a55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.481380 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerStarted","Data":"38b5aefe4ec48ce76fe2fea682de14b5136c0230210e75b0fe99f28a5da97056"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.482177 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="ceilometer-notification-agent" containerID="cri-o://70fd91c995d23e25c9ff114e07fe254c7c4b38e579e232159768192a05733c5a" gracePeriod=30 Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.483704 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.484400 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="proxy-httpd" containerID="cri-o://38b5aefe4ec48ce76fe2fea682de14b5136c0230210e75b0fe99f28a5da97056" gracePeriod=30 Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.489752 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="sg-core" containerID="cri-o://fe1e2e8f905eff2b462d27b0abc257ab3e598a35b698cb2cc299faac0441836c" gracePeriod=30 Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.523223 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-789dd74f99-dtwb4" event={"ID":"0786ef5c-404a-4c24-8188-d757082c1419","Type":"ContainerStarted","Data":"5e52387abb8db9a7ba51486ff2ef74799a824c485b53cb5584520a2b831657b8"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.523377 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-789dd74f99-dtwb4" event={"ID":"0786ef5c-404a-4c24-8188-d757082c1419","Type":"ContainerStarted","Data":"914c73185706d04b4c7abbf87f951b113512001ea07abd6a8b1a2f8b9544ae97"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.539042 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" event={"ID":"2aa77b26-ca52-4ef9-a1c2-68237a080e1b","Type":"ContainerStarted","Data":"8a28be6fcd3cc8ae54d5b62d780f3dbc9a37833cfdf56f3ecf05addba7e29767"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.539132 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" event={"ID":"2aa77b26-ca52-4ef9-a1c2-68237a080e1b","Type":"ContainerStarted","Data":"aee70d0d258634c10a47e7279f1d61274563fc998a0a698d929235194718d2b0"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.539149 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" 
event={"ID":"2aa77b26-ca52-4ef9-a1c2-68237a080e1b","Type":"ContainerStarted","Data":"af865c2d6a7599431cc167ddc4792aa4a728a78d4bcefdde7617b5368343839c"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.539645 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.539829 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.566965 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" event={"ID":"c316e952-31f5-42ef-bd28-f09412dd0118","Type":"ContainerStarted","Data":"463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.569314 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.585350 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7n6pw" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.585899 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" event={"ID":"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f","Type":"ContainerStarted","Data":"63c2aa0b324743a47524c9faaf479d8a77e713bd0f39484e4cdff21cd950f0f3"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.589768 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" event={"ID":"91c5b9c5-d4c7-4138-90de-ee51de9f7a5f","Type":"ContainerStarted","Data":"d772172347b04ecbfd6d02e8a0eb2378bd9a40ffce06b0cc14517c7f94db95fe"} Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.603253 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-789dd74f99-dtwb4" podStartSLOduration=2.846682929 podStartE2EDuration="11.603219484s" podCreationTimestamp="2026-02-03 12:25:29 +0000 UTC" firstStartedPulling="2026-02-03 12:25:30.482860172 +0000 UTC m=+1202.957756260" lastFinishedPulling="2026-02-03 12:25:39.239396727 +0000 UTC m=+1211.714292815" observedRunningTime="2026-02-03 12:25:40.569147177 +0000 UTC m=+1213.044043275" watchObservedRunningTime="2026-02-03 12:25:40.603219484 +0000 UTC m=+1213.078115572" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.630217 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" podStartSLOduration=11.630182486 podStartE2EDuration="11.630182486s" podCreationTimestamp="2026-02-03 12:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:40.614260552 +0000 UTC m=+1213.089156660" watchObservedRunningTime="2026-02-03 12:25:40.630182486 +0000 UTC m=+1213.105078594" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.690576 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" podStartSLOduration=8.690544178 podStartE2EDuration="8.690544178s" podCreationTimestamp="2026-02-03 12:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:40.681943574 +0000 UTC m=+1213.156839662" 
watchObservedRunningTime="2026-02-03 12:25:40.690544178 +0000 UTC m=+1213.165440266" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.724168 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7d4655f6d4-rdwtj" podStartSLOduration=3.053511425 podStartE2EDuration="11.724115232s" podCreationTimestamp="2026-02-03 12:25:29 +0000 UTC" firstStartedPulling="2026-02-03 12:25:30.568662286 +0000 UTC m=+1203.043558374" lastFinishedPulling="2026-02-03 12:25:39.239266093 +0000 UTC m=+1211.714162181" observedRunningTime="2026-02-03 12:25:40.70868347 +0000 UTC m=+1213.183579558" watchObservedRunningTime="2026-02-03 12:25:40.724115232 +0000 UTC m=+1213.199011330" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.764496 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:40 crc kubenswrapper[4679]: E0203 12:25:40.765058 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9ca558-ad13-4599-80e5-05be55c84a55" containerName="cinder-db-sync" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.765075 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9ca558-ad13-4599-80e5-05be55c84a55" containerName="cinder-db-sync" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.765295 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9ca558-ad13-4599-80e5-05be55c84a55" containerName="cinder-db-sync" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.789418 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.795302 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.795535 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nbb4v" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.803242 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.816831 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.817048 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.863940 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.864002 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d775db07-db5d-47fc-a76a-d70c5dda4aa4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.864032 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2sqr\" (UniqueName: \"kubernetes.io/projected/d775db07-db5d-47fc-a76a-d70c5dda4aa4-kube-api-access-f2sqr\") pod \"cinder-scheduler-0\" 
(UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.864057 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-scripts\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.864133 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.878026 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.937647 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-7fxst"] Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.980347 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.980438 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d775db07-db5d-47fc-a76a-d70c5dda4aa4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.980464 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2sqr\" (UniqueName: \"kubernetes.io/projected/d775db07-db5d-47fc-a76a-d70c5dda4aa4-kube-api-access-f2sqr\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.980483 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-scripts\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.980556 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.980595 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.981749 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d775db07-db5d-47fc-a76a-d70c5dda4aa4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.989470 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.992192 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.993005 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-scripts\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:40 crc kubenswrapper[4679]: I0203 12:25:40.993769 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.004773 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d29d9"] Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.012085 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.016416 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2sqr\" (UniqueName: \"kubernetes.io/projected/d775db07-db5d-47fc-a76a-d70c5dda4aa4-kube-api-access-f2sqr\") pod \"cinder-scheduler-0\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.019412 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d29d9"] Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.083221 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.083273 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29bh\" (UniqueName: \"kubernetes.io/projected/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-kube-api-access-t29bh\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.083331 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.083379 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-config\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.083436 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.083526 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.158393 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.179820 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.181665 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.185270 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.185321 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29bh\" (UniqueName: \"kubernetes.io/projected/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-kube-api-access-t29bh\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.185372 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.185401 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-config\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.185444 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.185509 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.186591 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.186835 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-config\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.187328 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc 
kubenswrapper[4679]: I0203 12:25:41.189326 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.190336 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.194338 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.206140 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.226508 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29bh\" (UniqueName: \"kubernetes.io/projected/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-kube-api-access-t29bh\") pod \"dnsmasq-dns-5c9776ccc5-d29d9\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289042 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-logs\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289092 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289135 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289238 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289265 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-scripts\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289297 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvhkp\" (UniqueName: 
\"kubernetes.io/projected/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-kube-api-access-mvhkp\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.289350 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data-custom\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392269 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-scripts\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392389 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvhkp\" (UniqueName: \"kubernetes.io/projected/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-kube-api-access-mvhkp\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392471 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data-custom\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392585 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-logs\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392612 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392640 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.392724 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.393457 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-logs\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.393662 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.399558 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data-custom\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.402076 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.409594 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.414012 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-scripts\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.416197 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.426656 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvhkp\" (UniqueName: \"kubernetes.io/projected/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-kube-api-access-mvhkp\") pod \"cinder-api-0\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") " pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.614074 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.687684 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerID="38b5aefe4ec48ce76fe2fea682de14b5136c0230210e75b0fe99f28a5da97056" exitCode=0 Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.687726 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerID="fe1e2e8f905eff2b462d27b0abc257ab3e598a35b698cb2cc299faac0441836c" exitCode=2 Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.687733 4679 generic.go:334] "Generic (PLEG): container finished" podID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerID="70fd91c995d23e25c9ff114e07fe254c7c4b38e579e232159768192a05733c5a" exitCode=0 Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.689141 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerDied","Data":"38b5aefe4ec48ce76fe2fea682de14b5136c0230210e75b0fe99f28a5da97056"} Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.689179 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerDied","Data":"fe1e2e8f905eff2b462d27b0abc257ab3e598a35b698cb2cc299faac0441836c"} Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.689225 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerDied","Data":"70fd91c995d23e25c9ff114e07fe254c7c4b38e579e232159768192a05733c5a"} Feb 03 12:25:41 crc kubenswrapper[4679]: I0203 12:25:41.857925 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:42 crc kubenswrapper[4679]: W0203 12:25:42.104242 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod826b897e_db6e_4e5c_a3a4_c4b78b1cd377.slice/crio-42d779c1a47c15ce01285413c81f48c0e3e1c4c6088150d85a4593a0969aec40 WatchSource:0}: Error finding container 42d779c1a47c15ce01285413c81f48c0e3e1c4c6088150d85a4593a0969aec40: Status 404 returned error can't find the container with id 42d779c1a47c15ce01285413c81f48c0e3e1c4c6088150d85a4593a0969aec40 Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.110930 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d29d9"] Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.507053 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583446 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vld\" (UniqueName: \"kubernetes.io/projected/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-kube-api-access-j2vld\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583628 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-combined-ca-bundle\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583766 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-scripts\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583812 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-sg-core-conf-yaml\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583901 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-log-httpd\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583937 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-run-httpd\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.583973 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-config-data\") pod \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\" (UID: \"2d70a61e-3ae9-4111-9a4d-6bc363fb09db\") " Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.609071 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.609200 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.622317 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-scripts" (OuterVolumeSpecName: "scripts") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.645694 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-kube-api-access-j2vld" (OuterVolumeSpecName: "kube-api-access-j2vld") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). InnerVolumeSpecName "kube-api-access-j2vld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.668393 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.696200 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.696257 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.696273 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.696284 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2vld\" (UniqueName: \"kubernetes.io/projected/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-kube-api-access-j2vld\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.719098 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9","Type":"ContainerStarted","Data":"6c510d9ebd73a863b0a892ba5bf1f483f6154f66d7a1c9b2949874b5794a09bc"} Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.744426 4679 generic.go:334] "Generic (PLEG): container finished" podID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerID="07fae2ab84183685cc0f50aec61e2eeea49ee326b970f684e650741fc52f68d0" exitCode=137 Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.744490 4679 generic.go:334] "Generic (PLEG): container finished" podID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerID="4184193abacd23b537ac7046ae7bf8fd557a5e807c933fd578d176dc710fa9e4" exitCode=137 Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.744568 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b5f78fb9-n9b9g" event={"ID":"79aa98a4-ef64-4380-a7e1-4d2b04f7279f","Type":"ContainerDied","Data":"07fae2ab84183685cc0f50aec61e2eeea49ee326b970f684e650741fc52f68d0"} Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.744621 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b5f78fb9-n9b9g" 
event={"ID":"79aa98a4-ef64-4380-a7e1-4d2b04f7279f","Type":"ContainerDied","Data":"4184193abacd23b537ac7046ae7bf8fd557a5e807c933fd578d176dc710fa9e4"} Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.780633 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d70a61e-3ae9-4111-9a4d-6bc363fb09db","Type":"ContainerDied","Data":"70ef0ac06a9432360f47e13a91b6536fbe2b345abca45776129e0f0fbe1e84b3"} Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.780721 4679 scope.go:117] "RemoveContainer" containerID="38b5aefe4ec48ce76fe2fea682de14b5136c0230210e75b0fe99f28a5da97056" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.780912 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.784760 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.787283 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" event={"ID":"826b897e-db6e-4e5c-a3a4-c4b78b1cd377","Type":"ContainerStarted","Data":"42d779c1a47c15ce01285413c81f48c0e3e1c4c6088150d85a4593a0969aec40"} Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.792466 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" containerName="dnsmasq-dns" containerID="cri-o://463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059" gracePeriod=10 Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.792807 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d775db07-db5d-47fc-a76a-d70c5dda4aa4","Type":"ContainerStarted","Data":"527b5e3f7b450e26684ac1ad7b0f122f67cce1b96a9b8e8f3cfefc28b6b95765"} Feb 03 12:25:42 crc kubenswrapper[4679]: I0203 12:25:42.806352 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.098564 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-config-data" (OuterVolumeSpecName: "config-data") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.117117 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.128303 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d70a61e-3ae9-4111-9a4d-6bc363fb09db" (UID: "2d70a61e-3ae9-4111-9a4d-6bc363fb09db"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.218281 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84b5f78fb9-n9b9g" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.219885 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d70a61e-3ae9-4111-9a4d-6bc363fb09db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.240949 4679 scope.go:117] "RemoveContainer" containerID="fe1e2e8f905eff2b462d27b0abc257ab3e598a35b698cb2cc299faac0441836c" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.259629 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.326803 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-config-data\") pod \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.327310 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-scripts\") pod \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.327374 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-horizon-secret-key\") pod \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.327427 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65z6c\" (UniqueName: \"kubernetes.io/projected/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-kube-api-access-65z6c\") pod \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.327482 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-logs\") pod \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\" (UID: \"79aa98a4-ef64-4380-a7e1-4d2b04f7279f\") " Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.330847 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-logs" (OuterVolumeSpecName: "logs") pod "79aa98a4-ef64-4380-a7e1-4d2b04f7279f" (UID: "79aa98a4-ef64-4380-a7e1-4d2b04f7279f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.349723 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-kube-api-access-65z6c" (OuterVolumeSpecName: "kube-api-access-65z6c") pod "79aa98a4-ef64-4380-a7e1-4d2b04f7279f" (UID: "79aa98a4-ef64-4380-a7e1-4d2b04f7279f"). InnerVolumeSpecName "kube-api-access-65z6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.371574 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-config-data" (OuterVolumeSpecName: "config-data") pod "79aa98a4-ef64-4380-a7e1-4d2b04f7279f" (UID: "79aa98a4-ef64-4380-a7e1-4d2b04f7279f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.378946 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "79aa98a4-ef64-4380-a7e1-4d2b04f7279f" (UID: "79aa98a4-ef64-4380-a7e1-4d2b04f7279f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.394874 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.422604 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-scripts" (OuterVolumeSpecName: "scripts") pod "79aa98a4-ef64-4380-a7e1-4d2b04f7279f" (UID: "79aa98a4-ef64-4380-a7e1-4d2b04f7279f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.431371 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.431424 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.431438 4679 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.431454 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65z6c\" (UniqueName: \"kubernetes.io/projected/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-kube-api-access-65z6c\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.431468 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79aa98a4-ef64-4380-a7e1-4d2b04f7279f-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.538589 4679 scope.go:117] "RemoveContainer" containerID="70fd91c995d23e25c9ff114e07fe254c7c4b38e579e232159768192a05733c5a" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.697274 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.728487 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.740715 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:25:43 crc kubenswrapper[4679]: E0203 12:25:43.741478 4679 
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741495 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="ceilometer-notification-agent"
Feb 03 12:25:43 crc kubenswrapper[4679]: E0203 12:25:43.741509 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741515 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon"
Feb 03 12:25:43 crc kubenswrapper[4679]: E0203 12:25:43.741529 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="proxy-httpd"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741535 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="proxy-httpd"
Feb 03 12:25:43 crc kubenswrapper[4679]: E0203 12:25:43.741557 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="sg-core"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741563 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="sg-core"
Feb 03 12:25:43 crc kubenswrapper[4679]: E0203 12:25:43.741579 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon-log"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741585 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon-log"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741861 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="ceilometer-notification-agent"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741882 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="proxy-httpd"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741891 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741907 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" containerName="sg-core"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.741917 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" containerName="horizon-log"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.744147 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.748324 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.757146 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.758232 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.802604 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-7fxst"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.829660 4679 generic.go:334] "Generic (PLEG): container finished" podID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerID="fd14f27cac64aec0c46d763e4b85336924a7a2c29f3ae1c3ec0acc8f42535002" exitCode=0
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.829816 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" event={"ID":"826b897e-db6e-4e5c-a3a4-c4b78b1cd377","Type":"ContainerDied","Data":"fd14f27cac64aec0c46d763e4b85336924a7a2c29f3ae1c3ec0acc8f42535002"}
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845565 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845641 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-run-httpd\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845681 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-scripts\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845731 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-config-data\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845828 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-log-httpd\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845892 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.845917 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7bl\" (UniqueName: \"kubernetes.io/projected/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-kube-api-access-sr7bl\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.881628 4679 generic.go:334] "Generic (PLEG): container finished" podID="c316e952-31f5-42ef-bd28-f09412dd0118" containerID="463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059" exitCode=0
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.881782 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" event={"ID":"c316e952-31f5-42ef-bd28-f09412dd0118","Type":"ContainerDied","Data":"463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059"}
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.881822 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" event={"ID":"c316e952-31f5-42ef-bd28-f09412dd0118","Type":"ContainerDied","Data":"34a8d1e271deb757287af88b2e4e83c05df853d1846a03e6ff77bbe6909a0cc2"}
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.881843 4679 scope.go:117] "RemoveContainer" containerID="463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.882558 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-7fxst"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.896165 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84b5f78fb9-n9b9g" event={"ID":"79aa98a4-ef64-4380-a7e1-4d2b04f7279f","Type":"ContainerDied","Data":"910d49890c1c821a24dc0776f520f5cff08df005b6e7cf22a7deea502e33e331"}
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.896303 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84b5f78fb9-n9b9g"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.947377 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-nb\") pod \"c316e952-31f5-42ef-bd28-f09412dd0118\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") "
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.947430 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-sb\") pod \"c316e952-31f5-42ef-bd28-f09412dd0118\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") "
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.947458 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-config\") pod \"c316e952-31f5-42ef-bd28-f09412dd0118\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") "
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.947520 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-swift-storage-0\") pod \"c316e952-31f5-42ef-bd28-f09412dd0118\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") "
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.947659 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcnmj\" (UniqueName: \"kubernetes.io/projected/c316e952-31f5-42ef-bd28-f09412dd0118-kube-api-access-kcnmj\") pod \"c316e952-31f5-42ef-bd28-f09412dd0118\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") "
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.947839 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-svc\") pod \"c316e952-31f5-42ef-bd28-f09412dd0118\" (UID: \"c316e952-31f5-42ef-bd28-f09412dd0118\") "
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948148 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-config-data\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948293 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-log-httpd\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948402 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948429 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7bl\" (UniqueName: \"kubernetes.io/projected/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-kube-api-access-sr7bl\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948504 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948527 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-run-httpd\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.948581 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-scripts\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.955117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-log-httpd\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.956768 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-run-httpd\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.958147 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-scripts\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.964301 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.966820 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84b5f78fb9-n9b9g"]
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.970712 4679 scope.go:117] "RemoveContainer" containerID="55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.979042 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84b5f78fb9-n9b9g"]
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.979722 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-config-data\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0"
Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.979915 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c316e952-31f5-42ef-bd28-f09412dd0118-kube-api-access-kcnmj" (OuterVolumeSpecName: "kube-api-access-kcnmj") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "kube-api-access-kcnmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
(OuterVolumeSpecName: "kube-api-access-kcnmj") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "kube-api-access-kcnmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:43 crc kubenswrapper[4679]: I0203 12:25:43.986436 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.000188 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7bl\" (UniqueName: \"kubernetes.io/projected/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-kube-api-access-sr7bl\") pod \"ceilometer-0\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") " pod="openstack/ceilometer-0" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.009080 4679 scope.go:117] "RemoveContainer" containerID="463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059" Feb 03 12:25:44 crc kubenswrapper[4679]: E0203 12:25:44.012813 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059\": container with ID starting with 463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059 not found: ID does not exist" containerID="463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.012891 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059"} err="failed to get container status \"463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059\": rpc error: code = NotFound desc = could not find container \"463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059\": container with ID starting with 463043d31b58c87b17db23ef9205d6fc9cd10f14699a24e1a5d07b766bf82059 not found: ID does not exist" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.012933 4679 scope.go:117] "RemoveContainer" containerID="55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a" Feb 03 12:25:44 crc kubenswrapper[4679]: E0203 12:25:44.013806 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a\": container with ID starting with 55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a not found: ID does not exist" containerID="55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.013861 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a"} err="failed to get container status \"55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a\": rpc error: code = NotFound desc = could not find container \"55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a\": container with ID starting with 55e5ea59a5b5fbcf84fb6a2c4937df6ae78ac2c4f82ae1a44d9007b23af8cd1a not found: ID does not exist" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.013907 4679 scope.go:117] "RemoveContainer" 
containerID="07fae2ab84183685cc0f50aec61e2eeea49ee326b970f684e650741fc52f68d0" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.050443 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.053431 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.053694 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcnmj\" (UniqueName: \"kubernetes.io/projected/c316e952-31f5-42ef-bd28-f09412dd0118-kube-api-access-kcnmj\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.067958 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.071044 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.076862 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.082458 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.096506 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.106624 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-config" (OuterVolumeSpecName: "config") pod "c316e952-31f5-42ef-bd28-f09412dd0118" (UID: "c316e952-31f5-42ef-bd28-f09412dd0118"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.163411 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.163455 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.163466 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.163476 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c316e952-31f5-42ef-bd28-f09412dd0118-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.234011 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d70a61e-3ae9-4111-9a4d-6bc363fb09db" path="/var/lib/kubelet/pods/2d70a61e-3ae9-4111-9a4d-6bc363fb09db/volumes" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.235499 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79aa98a4-ef64-4380-a7e1-4d2b04f7279f" path="/var/lib/kubelet/pods/79aa98a4-ef64-4380-a7e1-4d2b04f7279f/volumes" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.369820 4679 scope.go:117] "RemoveContainer" containerID="4184193abacd23b537ac7046ae7bf8fd557a5e807c933fd578d176dc710fa9e4" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.919013 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d775db07-db5d-47fc-a76a-d70c5dda4aa4","Type":"ContainerStarted","Data":"51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d"} Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.923949 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9","Type":"ContainerStarted","Data":"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6"} Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.937969 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" event={"ID":"826b897e-db6e-4e5c-a3a4-c4b78b1cd377","Type":"ContainerStarted","Data":"fd64be3c7267c16b4462d55443bfe38b05347f083d3440bf06e18a78c62abfba"} Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.938166 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.969533 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:25:44 crc kubenswrapper[4679]: I0203 12:25:44.985674 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" podStartSLOduration=4.98563995 podStartE2EDuration="4.98563995s" podCreationTimestamp="2026-02-03 12:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:44.959180981 +0000 UTC m=+1217.434077059" watchObservedRunningTime="2026-02-03 
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.951693 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerStarted","Data":"ef333dfd8ffb3353933bc6f48b449bb91a9b778fc7dbdabd3b777b6b3ac6b958"}
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.952705 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerStarted","Data":"ad8653712dee83539ad3960d82334df6cda2f0974c605175e22f3082355f6ed1"}
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.953916 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d775db07-db5d-47fc-a76a-d70c5dda4aa4","Type":"ContainerStarted","Data":"88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144"}
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.958333 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9","Type":"ContainerStarted","Data":"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138"}
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.958451 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api-log" containerID="cri-o://c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6" gracePeriod=30
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.958679 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api" containerID="cri-o://1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138" gracePeriod=30
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.959036 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 03 12:25:45 crc kubenswrapper[4679]: I0203 12:25:45.994811 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.56581276 podStartE2EDuration="5.994779552s" podCreationTimestamp="2026-02-03 12:25:40 +0000 UTC" firstStartedPulling="2026-02-03 12:25:41.896266419 +0000 UTC m=+1214.371162507" lastFinishedPulling="2026-02-03 12:25:43.325233211 +0000 UTC m=+1215.800129299" observedRunningTime="2026-02-03 12:25:45.979001181 +0000 UTC m=+1218.453897279" watchObservedRunningTime="2026-02-03 12:25:45.994779552 +0000 UTC m=+1218.469675650"
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.015775 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.015727448 podStartE2EDuration="5.015727448s" podCreationTimestamp="2026-02-03 12:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:46.00508296 +0000 UTC m=+1218.479979048" watchObservedRunningTime="2026-02-03 12:25:46.015727448 +0000 UTC m=+1218.490623526"
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.059656 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b5df755bd-sgmzz"
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.159302 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.607781 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.744915 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-logs" (OuterVolumeSpecName: "logs") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.744987 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-logs\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.745059 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvhkp\" (UniqueName: \"kubernetes.io/projected/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-kube-api-access-mvhkp\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.746189 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-scripts\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.746237 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.746342 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-etc-machine-id\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.746406 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data-custom\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.746423 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-combined-ca-bundle\") pod \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\" (UID: \"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9\") "
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.746830 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.747332 4679 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.747349 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.765420 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-kube-api-access-mvhkp" (OuterVolumeSpecName: "kube-api-access-mvhkp") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "kube-api-access-mvhkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.765428 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.765578 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-scripts" (OuterVolumeSpecName: "scripts") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.786740 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.803573 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data" (OuterVolumeSpecName: "config-data") pod "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" (UID: "753ceaf8-2f7e-4021-9027-f3f41eb7a7c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.849455 4679 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.849496 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.849509 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvhkp\" (UniqueName: \"kubernetes.io/projected/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-kube-api-access-mvhkp\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.849521 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.849533 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.979595 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerStarted","Data":"d125ca68c1b41f8a3e7cec808f9671b075ae7c419ea999fc8c34604964525143"}
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983442 4679 generic.go:334] "Generic (PLEG): container finished" podID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerID="1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138" exitCode=0
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983489 4679 generic.go:334] "Generic (PLEG): container finished" podID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerID="c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6" exitCode=143
Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983592 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983671 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9","Type":"ContainerDied","Data":"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138"} Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983771 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9","Type":"ContainerDied","Data":"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6"} Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983807 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"753ceaf8-2f7e-4021-9027-f3f41eb7a7c9","Type":"ContainerDied","Data":"6c510d9ebd73a863b0a892ba5bf1f483f6154f66d7a1c9b2949874b5794a09bc"} Feb 03 12:25:46 crc kubenswrapper[4679]: I0203 12:25:46.983908 4679 scope.go:117] "RemoveContainer" containerID="1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.027552 4679 scope.go:117] "RemoveContainer" containerID="c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.036943 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.052402 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.064013 4679 scope.go:117] "RemoveContainer" containerID="1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138" Feb 03 12:25:47 crc kubenswrapper[4679]: E0203 12:25:47.071824 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138\": container with ID starting with 1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138 not found: ID does not exist" containerID="1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.071886 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138"} err="failed to get container status \"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138\": rpc error: code = NotFound desc = could not find container \"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138\": container with ID starting with 1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138 not found: ID does not exist" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.071972 4679 scope.go:117] "RemoveContainer" containerID="c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6" Feb 03 12:25:47 crc kubenswrapper[4679]: E0203 12:25:47.072628 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6\": container with ID starting with c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6 not found: ID does not exist" containerID="c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.072659 4679 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6"} err="failed to get container status \"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6\": rpc error: code = NotFound desc = could not find container \"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6\": container with ID starting with c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6 not found: ID does not exist" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.072679 4679 scope.go:117] "RemoveContainer" containerID="1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.072874 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:47 crc kubenswrapper[4679]: E0203 12:25:47.073546 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api-log" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.073648 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api-log" Feb 03 12:25:47 crc kubenswrapper[4679]: E0203 12:25:47.073729 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.073802 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api" Feb 03 12:25:47 crc kubenswrapper[4679]: E0203 12:25:47.073902 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" containerName="init" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.073984 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" containerName="init" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.073807 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138"} err="failed to get container status \"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138\": rpc error: code = NotFound desc = could not find container \"1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138\": container with ID starting with 1b7ec161d149670961a2effb531600b10e80fdddb4f5ef91339c382cfe9e5138 not found: ID does not exist" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.074067 4679 scope.go:117] "RemoveContainer" containerID="c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6" Feb 03 12:25:47 crc kubenswrapper[4679]: E0203 12:25:47.074186 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" containerName="dnsmasq-dns" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.074247 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" containerName="dnsmasq-dns" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.074507 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" containerName="dnsmasq-dns" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.074607 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" 
containerName="cinder-api" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.074687 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" containerName="cinder-api-log" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.074611 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6"} err="failed to get container status \"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6\": rpc error: code = NotFound desc = could not find container \"c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6\": container with ID starting with c5f349eebcc1ebb55cf6785cacd579c3cc93674e3b7fa8fc3fef5f0a93dd18a6 not found: ID does not exist" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.075995 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.081010 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.081098 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.081193 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.093909 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.157498 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-config-data-custom\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.157600 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-scripts\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.157648 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbe4378f-83bf-420b-b73a-185c57ab9771-logs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.157806 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljwc\" (UniqueName: \"kubernetes.io/projected/bbe4378f-83bf-420b-b73a-185c57ab9771-kube-api-access-6ljwc\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.157873 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-config-data\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 
12:25:47.157906 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.157985 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbe4378f-83bf-420b-b73a-185c57ab9771-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.158132 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.158257 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.261582 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbe4378f-83bf-420b-b73a-185c57ab9771-logs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.261749 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljwc\" (UniqueName: \"kubernetes.io/projected/bbe4378f-83bf-420b-b73a-185c57ab9771-kube-api-access-6ljwc\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.261810 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-config-data\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.261878 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.261958 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbe4378f-83bf-420b-b73a-185c57ab9771-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.262028 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.262097 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbe4378f-83bf-420b-b73a-185c57ab9771-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.262097 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbe4378f-83bf-420b-b73a-185c57ab9771-logs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.262369 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.262432 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-config-data-custom\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.262541 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-scripts\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.266493 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.267332 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-scripts\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.267461 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.267513 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.271176 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-config-data\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 
12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.284891 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljwc\" (UniqueName: \"kubernetes.io/projected/bbe4378f-83bf-420b-b73a-185c57ab9771-kube-api-access-6ljwc\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.291316 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbe4378f-83bf-420b-b73a-185c57ab9771-config-data-custom\") pod \"cinder-api-0\" (UID: \"bbe4378f-83bf-420b-b73a-185c57ab9771\") " pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.410136 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 12:25:47 crc kubenswrapper[4679]: I0203 12:25:47.907380 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 12:25:48 crc kubenswrapper[4679]: I0203 12:25:48.000840 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bbe4378f-83bf-420b-b73a-185c57ab9771","Type":"ContainerStarted","Data":"0415c4b9d46bdd94853887704a728f298dff1f614fec8f76f19904b73a149119"} Feb 03 12:25:48 crc kubenswrapper[4679]: I0203 12:25:48.013500 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerStarted","Data":"3f2e5de3c76b494383c477d2712a4bd28c70ae0c518a0fc7e645aa099fcecb8e"} Feb 03 12:25:48 crc kubenswrapper[4679]: I0203 12:25:48.226396 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753ceaf8-2f7e-4021-9027-f3f41eb7a7c9" path="/var/lib/kubelet/pods/753ceaf8-2f7e-4021-9027-f3f41eb7a7c9/volumes" Feb 03 12:25:48 crc kubenswrapper[4679]: I0203 12:25:48.303440 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:25:49 crc kubenswrapper[4679]: I0203 12:25:49.057774 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bbe4378f-83bf-420b-b73a-185c57ab9771","Type":"ContainerStarted","Data":"0b0052be32f3edabe245f69f664925754c42a55b2954f5361962e581b4f822a0"} Feb 03 12:25:49 crc kubenswrapper[4679]: I0203 12:25:49.535698 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:49 crc kubenswrapper[4679]: I0203 12:25:49.801782 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" Feb 03 12:25:49 crc kubenswrapper[4679]: I0203 12:25:49.929458 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d9bddd974-qkcc6"] Feb 03 12:25:49 crc kubenswrapper[4679]: I0203 12:25:49.929913 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d9bddd974-qkcc6" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api-log" containerID="cri-o://387b2bf944122d2d71bb49301861367f3d343a4dde74a38f905059a695381e43" gracePeriod=30 Feb 03 12:25:49 crc kubenswrapper[4679]: I0203 12:25:49.930156 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d9bddd974-qkcc6" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api" 
containerID="cri-o://0a6af023cca3a3cd2012aafa83fe7fb0ed8eb324a4dc0df9134209f0cc4fec72" gracePeriod=30 Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.127578 4679 generic.go:334] "Generic (PLEG): container finished" podID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerID="387b2bf944122d2d71bb49301861367f3d343a4dde74a38f905059a695381e43" exitCode=143 Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.128109 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d9bddd974-qkcc6" event={"ID":"7d506818-e4fd-46d6-a225-d1685dea3d6d","Type":"ContainerDied","Data":"387b2bf944122d2d71bb49301861367f3d343a4dde74a38f905059a695381e43"} Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.140487 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bbe4378f-83bf-420b-b73a-185c57ab9771","Type":"ContainerStarted","Data":"99f2b955e79ab4278be42b72bd859833e1052f6bb2b2c178429c7d5dede13217"} Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.140891 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.146495 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.194133 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.194109862 podStartE2EDuration="3.194109862s" podCreationTimestamp="2026-02-03 12:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:25:50.191913594 +0000 UTC m=+1222.666809692" watchObservedRunningTime="2026-02-03 12:25:50.194109862 +0000 UTC m=+1222.669005950" Feb 03 12:25:50 crc kubenswrapper[4679]: I0203 12:25:50.226712 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.777214959 podStartE2EDuration="7.22668656s" podCreationTimestamp="2026-02-03 12:25:43 +0000 UTC" firstStartedPulling="2026-02-03 12:25:44.987765615 +0000 UTC m=+1217.462661703" lastFinishedPulling="2026-02-03 12:25:49.437237216 +0000 UTC m=+1221.912133304" observedRunningTime="2026-02-03 12:25:50.221635998 +0000 UTC m=+1222.696532086" watchObservedRunningTime="2026-02-03 12:25:50.22668656 +0000 UTC m=+1222.701582648" Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.160758 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerStarted","Data":"d5562ced629fb4395f0c3713b3687397a23931c3b43470db26fa4057d82e6b49"} Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.404739 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.421374 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.425068 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.501119 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-p8xhk"] Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.501640 4679 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerName="dnsmasq-dns" containerID="cri-o://697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba" gracePeriod=10 Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.556462 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:51 crc kubenswrapper[4679]: I0203 12:25:51.565976 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-755ddc4dc6-5tjzs" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.147042 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.172396 4679 generic.go:334] "Generic (PLEG): container finished" podID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerID="697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba" exitCode=0 Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.172491 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" event={"ID":"b340ddef-8a7b-459d-af05-97756d80e7eb","Type":"ContainerDied","Data":"697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba"} Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.172582 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" event={"ID":"b340ddef-8a7b-459d-af05-97756d80e7eb","Type":"ContainerDied","Data":"c384b06cb6f5594a9b4599ffc99f73631aa7ddd8e531a6b4b55418a6b36c6f92"} Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.172611 4679 scope.go:117] "RemoveContainer" containerID="697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.172523 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-p8xhk" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.172792 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="cinder-scheduler" containerID="cri-o://51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d" gracePeriod=30 Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.173012 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="probe" containerID="cri-o://88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144" gracePeriod=30 Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.217299 4679 scope.go:117] "RemoveContainer" containerID="4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.245836 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-svc\") pod \"b340ddef-8a7b-459d-af05-97756d80e7eb\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.246610 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-nb\") pod \"b340ddef-8a7b-459d-af05-97756d80e7eb\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.246797 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwrf4\" (UniqueName: \"kubernetes.io/projected/b340ddef-8a7b-459d-af05-97756d80e7eb-kube-api-access-zwrf4\") pod \"b340ddef-8a7b-459d-af05-97756d80e7eb\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.274321 4679 scope.go:117] "RemoveContainer" containerID="697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba" Feb 03 12:25:52 crc kubenswrapper[4679]: E0203 12:25:52.274942 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba\": container with ID starting with 697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba not found: ID does not exist" containerID="697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.274983 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba"} err="failed to get container status \"697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba\": rpc error: code = NotFound desc = could not find container \"697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba\": container with ID starting with 697df255fc05cf34f3e78ee4a14cf5c3ee060e304b8a58874a42b2816cfd8eba not found: ID does not exist" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.275014 4679 scope.go:117] "RemoveContainer" containerID="4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a" Feb 03 12:25:52 crc kubenswrapper[4679]: E0203 12:25:52.275336 4679 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a\": container with ID starting with 4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a not found: ID does not exist" containerID="4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.275388 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a"} err="failed to get container status \"4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a\": rpc error: code = NotFound desc = could not find container \"4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a\": container with ID starting with 4d4cbf60d6085a12e146fe5bd8341f4f108ff925cec18cbfdd81c6493d8b960a not found: ID does not exist" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.276666 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b340ddef-8a7b-459d-af05-97756d80e7eb-kube-api-access-zwrf4" (OuterVolumeSpecName: "kube-api-access-zwrf4") pod "b340ddef-8a7b-459d-af05-97756d80e7eb" (UID: "b340ddef-8a7b-459d-af05-97756d80e7eb"). InnerVolumeSpecName "kube-api-access-zwrf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.330425 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b340ddef-8a7b-459d-af05-97756d80e7eb" (UID: "b340ddef-8a7b-459d-af05-97756d80e7eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.348457 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b340ddef-8a7b-459d-af05-97756d80e7eb" (UID: "b340ddef-8a7b-459d-af05-97756d80e7eb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.349705 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-swift-storage-0\") pod \"b340ddef-8a7b-459d-af05-97756d80e7eb\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.349777 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-config\") pod \"b340ddef-8a7b-459d-af05-97756d80e7eb\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.349857 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-sb\") pod \"b340ddef-8a7b-459d-af05-97756d80e7eb\" (UID: \"b340ddef-8a7b-459d-af05-97756d80e7eb\") " Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.350319 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.350337 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwrf4\" (UniqueName: \"kubernetes.io/projected/b340ddef-8a7b-459d-af05-97756d80e7eb-kube-api-access-zwrf4\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.350483 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.408955 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-config" (OuterVolumeSpecName: "config") pod "b340ddef-8a7b-459d-af05-97756d80e7eb" (UID: "b340ddef-8a7b-459d-af05-97756d80e7eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.410050 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b340ddef-8a7b-459d-af05-97756d80e7eb" (UID: "b340ddef-8a7b-459d-af05-97756d80e7eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.421541 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b340ddef-8a7b-459d-af05-97756d80e7eb" (UID: "b340ddef-8a7b-459d-af05-97756d80e7eb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.453211 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.453277 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.453297 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b340ddef-8a7b-459d-af05-97756d80e7eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.511725 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-p8xhk"] Feb 03 12:25:52 crc kubenswrapper[4679]: I0203 12:25:52.521525 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-p8xhk"] Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.227116 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" path="/var/lib/kubelet/pods/b340ddef-8a7b-459d-af05-97756d80e7eb/volumes" Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.230765 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-74557bdb5d-lsfq8" Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.246758 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b98ff4cb5-lk4d9" Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.319003 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-755ddc4dc6-5tjzs"] Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.319407 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon-log" containerID="cri-o://3d45b94948bb9d6ae9e9251381d37c498a102b8b8646ee693a3ee3f1edcbb7f5" gracePeriod=30 Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.319616 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" containerID="cri-o://8b50418522d858f152f94f7287764ae5b114c63d74924de8cc696f45b2e543fc" gracePeriod=30 Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.329674 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.364860 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54f6c577-sh4k7"] Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.365228 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54f6c577-sh4k7" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-api" containerID="cri-o://c2967118f4e04fe635e234e603378499b10810ba391817010ef28f04e388a11a" gracePeriod=30 Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 
12:25:54.365822 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54f6c577-sh4k7" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-httpd" containerID="cri-o://dc9a41aed40a2ec3da74c68eb6358cac759e6eff74b946a53bc543ef78a242f1" gracePeriod=30 Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.895530 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d9bddd974-qkcc6" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:33136->10.217.0.162:9311: read: connection reset by peer" Feb 03 12:25:54 crc kubenswrapper[4679]: I0203 12:25:54.895854 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d9bddd974-qkcc6" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:33132->10.217.0.162:9311: read: connection reset by peer" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.058815 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d9bddd974-qkcc6" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.058946 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d9bddd974-qkcc6" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.257988 4679 generic.go:334] "Generic (PLEG): container finished" podID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerID="88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144" exitCode=0 Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.258131 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d775db07-db5d-47fc-a76a-d70c5dda4aa4","Type":"ContainerDied","Data":"88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144"} Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.271476 4679 generic.go:334] "Generic (PLEG): container finished" podID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerID="0a6af023cca3a3cd2012aafa83fe7fb0ed8eb324a4dc0df9134209f0cc4fec72" exitCode=0 Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.271535 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d9bddd974-qkcc6" event={"ID":"7d506818-e4fd-46d6-a225-d1685dea3d6d","Type":"ContainerDied","Data":"0a6af023cca3a3cd2012aafa83fe7fb0ed8eb324a4dc0df9134209f0cc4fec72"} Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.576430 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.657201 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-combined-ca-bundle\") pod \"7d506818-e4fd-46d6-a225-d1685dea3d6d\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.657428 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d506818-e4fd-46d6-a225-d1685dea3d6d-logs\") pod \"7d506818-e4fd-46d6-a225-d1685dea3d6d\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.657487 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt4fh\" (UniqueName: \"kubernetes.io/projected/7d506818-e4fd-46d6-a225-d1685dea3d6d-kube-api-access-jt4fh\") pod \"7d506818-e4fd-46d6-a225-d1685dea3d6d\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.657532 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data\") pod \"7d506818-e4fd-46d6-a225-d1685dea3d6d\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.657586 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data-custom\") pod \"7d506818-e4fd-46d6-a225-d1685dea3d6d\" (UID: \"7d506818-e4fd-46d6-a225-d1685dea3d6d\") " Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.660037 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d506818-e4fd-46d6-a225-d1685dea3d6d-logs" (OuterVolumeSpecName: "logs") pod "7d506818-e4fd-46d6-a225-d1685dea3d6d" (UID: "7d506818-e4fd-46d6-a225-d1685dea3d6d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.672214 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7d506818-e4fd-46d6-a225-d1685dea3d6d" (UID: "7d506818-e4fd-46d6-a225-d1685dea3d6d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.697562 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d506818-e4fd-46d6-a225-d1685dea3d6d-kube-api-access-jt4fh" (OuterVolumeSpecName: "kube-api-access-jt4fh") pod "7d506818-e4fd-46d6-a225-d1685dea3d6d" (UID: "7d506818-e4fd-46d6-a225-d1685dea3d6d"). InnerVolumeSpecName "kube-api-access-jt4fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.761932 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d506818-e4fd-46d6-a225-d1685dea3d6d-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.761986 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt4fh\" (UniqueName: \"kubernetes.io/projected/7d506818-e4fd-46d6-a225-d1685dea3d6d-kube-api-access-jt4fh\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.762003 4679 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.785223 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d506818-e4fd-46d6-a225-d1685dea3d6d" (UID: "7d506818-e4fd-46d6-a225-d1685dea3d6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.829237 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data" (OuterVolumeSpecName: "config-data") pod "7d506818-e4fd-46d6-a225-d1685dea3d6d" (UID: "7d506818-e4fd-46d6-a225-d1685dea3d6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.866201 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:55 crc kubenswrapper[4679]: I0203 12:25:55.866265 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d506818-e4fd-46d6-a225-d1685dea3d6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.343556 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d9bddd974-qkcc6" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.352502 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d9bddd974-qkcc6" event={"ID":"7d506818-e4fd-46d6-a225-d1685dea3d6d","Type":"ContainerDied","Data":"4ec5bc1e97206390dcaac8189c86d5cbb9771437120c042b321f69c79cb5c020"} Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.352599 4679 scope.go:117] "RemoveContainer" containerID="0a6af023cca3a3cd2012aafa83fe7fb0ed8eb324a4dc0df9134209f0cc4fec72" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.380159 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/1.log" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.387029 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-api/0.log" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.387109 4679 generic.go:334] "Generic (PLEG): container finished" podID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerID="65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7" exitCode=137 Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.387241 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerDied","Data":"65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7"} Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.456208 4679 generic.go:334] "Generic (PLEG): container finished" podID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerID="dc9a41aed40a2ec3da74c68eb6358cac759e6eff74b946a53bc543ef78a242f1" exitCode=0 Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.458162 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54f6c577-sh4k7" event={"ID":"796e124c-5af3-4e3b-b261-a5c7f0348bb1","Type":"ContainerDied","Data":"dc9a41aed40a2ec3da74c68eb6358cac759e6eff74b946a53bc543ef78a242f1"} Feb 03 12:25:56 crc kubenswrapper[4679]: E0203 12:25:56.505008 4679 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02427941_88ff_43f4_a73a_2048cf4b0e7c.slice/crio-conmon-65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02427941_88ff_43f4_a73a_2048cf4b0e7c.slice/crio-65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7.scope\": RecentStats: unable to find data in memory cache]" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.529000 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d9bddd974-qkcc6"] Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.589585 4679 scope.go:117] "RemoveContainer" containerID="387b2bf944122d2d71bb49301861367f3d343a4dde74a38f905059a695381e43" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.592919 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d9bddd974-qkcc6"] Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.917759 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:56 crc kubenswrapper[4679]: I0203 12:25:56.985458 4679 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.338423 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/1.log" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.341069 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-api/0.log" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.341160 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.355148 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.368990 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqqf2\" (UniqueName: \"kubernetes.io/projected/02427941-88ff-43f4-a73a-2048cf4b0e7c-kube-api-access-qqqf2\") pod \"02427941-88ff-43f4-a73a-2048cf4b0e7c\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.369099 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-config\") pod \"02427941-88ff-43f4-a73a-2048cf4b0e7c\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.369225 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-combined-ca-bundle\") pod \"02427941-88ff-43f4-a73a-2048cf4b0e7c\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.369256 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-ovndb-tls-certs\") pod \"02427941-88ff-43f4-a73a-2048cf4b0e7c\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.369398 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-httpd-config\") pod \"02427941-88ff-43f4-a73a-2048cf4b0e7c\" (UID: \"02427941-88ff-43f4-a73a-2048cf4b0e7c\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.396424 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "02427941-88ff-43f4-a73a-2048cf4b0e7c" (UID: "02427941-88ff-43f4-a73a-2048cf4b0e7c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.447696 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02427941-88ff-43f4-a73a-2048cf4b0e7c-kube-api-access-qqqf2" (OuterVolumeSpecName: "kube-api-access-qqqf2") pod "02427941-88ff-43f4-a73a-2048cf4b0e7c" (UID: "02427941-88ff-43f4-a73a-2048cf4b0e7c"). InnerVolumeSpecName "kube-api-access-qqqf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.475748 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-scripts\") pod \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.475931 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data-custom\") pod \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.475990 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d775db07-db5d-47fc-a76a-d70c5dda4aa4-etc-machine-id\") pod \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.476082 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data\") pod \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.476156 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-combined-ca-bundle\") pod \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.476209 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2sqr\" (UniqueName: \"kubernetes.io/projected/d775db07-db5d-47fc-a76a-d70c5dda4aa4-kube-api-access-f2sqr\") pod \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\" (UID: \"d775db07-db5d-47fc-a76a-d70c5dda4aa4\") " Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.476699 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqqf2\" (UniqueName: \"kubernetes.io/projected/02427941-88ff-43f4-a73a-2048cf4b0e7c-kube-api-access-qqqf2\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.476717 4679 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.477668 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d775db07-db5d-47fc-a76a-d70c5dda4aa4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d775db07-db5d-47fc-a76a-d70c5dda4aa4" (UID: "d775db07-db5d-47fc-a76a-d70c5dda4aa4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.517836 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-scripts" (OuterVolumeSpecName: "scripts") pod "d775db07-db5d-47fc-a76a-d70c5dda4aa4" (UID: "d775db07-db5d-47fc-a76a-d70c5dda4aa4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.517911 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d775db07-db5d-47fc-a76a-d70c5dda4aa4" (UID: "d775db07-db5d-47fc-a76a-d70c5dda4aa4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.518025 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d775db07-db5d-47fc-a76a-d70c5dda4aa4-kube-api-access-f2sqr" (OuterVolumeSpecName: "kube-api-access-f2sqr") pod "d775db07-db5d-47fc-a76a-d70c5dda4aa4" (UID: "d775db07-db5d-47fc-a76a-d70c5dda4aa4"). InnerVolumeSpecName "kube-api-access-f2sqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.525076 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-httpd/1.log" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.533595 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b5df755bd-sgmzz_02427941-88ff-43f4-a73a-2048cf4b0e7c/neutron-api/0.log" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.533724 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b5df755bd-sgmzz" event={"ID":"02427941-88ff-43f4-a73a-2048cf4b0e7c","Type":"ContainerDied","Data":"efc568263fb215b4d1ec960708e6d6579b8cfb5b8ca58ac69b2aabfea4b2dc78"} Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.533779 4679 scope.go:117] "RemoveContainer" containerID="8eeffccde05570e59c434f4f3116cbfcd0459f2467683bfd18a1a1623ba14db8" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.533922 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b5df755bd-sgmzz" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.568903 4679 generic.go:334] "Generic (PLEG): container finished" podID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerID="51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d" exitCode=0 Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.570079 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.570596 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d775db07-db5d-47fc-a76a-d70c5dda4aa4","Type":"ContainerDied","Data":"51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d"} Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.570670 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d775db07-db5d-47fc-a76a-d70c5dda4aa4","Type":"ContainerDied","Data":"527b5e3f7b450e26684ac1ad7b0f122f67cce1b96a9b8e8f3cfefc28b6b95765"} Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.580150 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.580412 4679 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.582576 4679 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d775db07-db5d-47fc-a76a-d70c5dda4aa4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.582669 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2sqr\" (UniqueName: \"kubernetes.io/projected/d775db07-db5d-47fc-a76a-d70c5dda4aa4-kube-api-access-f2sqr\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.602674 4679 scope.go:117] "RemoveContainer" containerID="65f949a2b2b3fb20c98279c436924b83c9e00c12c1a5e5d4b05e111e15ce6bc7" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.628452 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02427941-88ff-43f4-a73a-2048cf4b0e7c" (UID: "02427941-88ff-43f4-a73a-2048cf4b0e7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.628584 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-config" (OuterVolumeSpecName: "config") pod "02427941-88ff-43f4-a73a-2048cf4b0e7c" (UID: "02427941-88ff-43f4-a73a-2048cf4b0e7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.636567 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "02427941-88ff-43f4-a73a-2048cf4b0e7c" (UID: "02427941-88ff-43f4-a73a-2048cf4b0e7c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.646594 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d775db07-db5d-47fc-a76a-d70c5dda4aa4" (UID: "d775db07-db5d-47fc-a76a-d70c5dda4aa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.661634 4679 scope.go:117] "RemoveContainer" containerID="88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.669550 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c6c4bcd4b-hkzh6" podUID="2aa77b26-ca52-4ef9-a1c2-68237a080e1b" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.686182 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.686227 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.686244 4679 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/02427941-88ff-43f4-a73a-2048cf4b0e7c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.686255 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.714702 4679 scope.go:117] "RemoveContainer" containerID="51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.776308 4679 scope.go:117] "RemoveContainer" containerID="88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.799433 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144\": container with ID starting with 88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144 not found: ID does not exist" containerID="88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.799547 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144"} err="failed to get container status \"88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144\": rpc error: code = NotFound desc = could not find container \"88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144\": container with ID starting with 88a514b4d5bdcc00c65555d64d057493a521050ba0985c4e1a23bb75f4b8f144 not found: ID does not 
exist" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.799588 4679 scope.go:117] "RemoveContainer" containerID="51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.803499 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d\": container with ID starting with 51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d not found: ID does not exist" containerID="51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.803552 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d"} err="failed to get container status \"51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d\": rpc error: code = NotFound desc = could not find container \"51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d\": container with ID starting with 51366b5a858e64ed73472b1d40f359bf2b7d11c74c341b87b50e04375d009c5d not found: ID does not exist" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.809607 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data" (OuterVolumeSpecName: "config-data") pod "d775db07-db5d-47fc-a76a-d70c5dda4aa4" (UID: "d775db07-db5d-47fc-a76a-d70c5dda4aa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.897764 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b5df755bd-sgmzz"] Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.903952 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d775db07-db5d-47fc-a76a-d70c5dda4aa4-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.911609 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b5df755bd-sgmzz"] Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.934175 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.948666 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.965427 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966164 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerName="init" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966190 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerName="init" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966211 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-api" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966220 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-api" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966232 4679 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="cinder-scheduler" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966240 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="cinder-scheduler" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966255 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api-log" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966266 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api-log" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966279 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-httpd" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966287 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-httpd" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966306 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-httpd" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966316 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-httpd" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966328 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="probe" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966336 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="probe" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966383 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966392 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api" Feb 03 12:25:57 crc kubenswrapper[4679]: E0203 12:25:57.966407 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerName="dnsmasq-dns" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966415 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerName="dnsmasq-dns" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966641 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b340ddef-8a7b-459d-af05-97756d80e7eb" containerName="dnsmasq-dns" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966656 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="probe" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966669 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-httpd" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966680 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-api" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966698 4679 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" containerName="cinder-scheduler" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966714 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api-log" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.966726 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" containerName="barbican-api" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.967170 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" containerName="neutron-httpd" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.968060 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.972030 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 12:25:57 crc kubenswrapper[4679]: I0203 12:25:57.985401 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.006735 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-scripts\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.006844 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.006885 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.007036 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4882a26d-4240-46b5-917c-dc6842916963-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.007077 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j78lx\" (UniqueName: \"kubernetes.io/projected/4882a26d-4240-46b5-917c-dc6842916963-kube-api-access-j78lx\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.007116 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-config-data\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " 
pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.108800 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j78lx\" (UniqueName: \"kubernetes.io/projected/4882a26d-4240-46b5-917c-dc6842916963-kube-api-access-j78lx\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.108887 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-config-data\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.108988 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-scripts\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.109017 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.109058 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.109150 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4882a26d-4240-46b5-917c-dc6842916963-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.109242 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4882a26d-4240-46b5-917c-dc6842916963-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.114028 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.115068 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-scripts\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.120210 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.121694 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4882a26d-4240-46b5-917c-dc6842916963-config-data\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.144732 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j78lx\" (UniqueName: \"kubernetes.io/projected/4882a26d-4240-46b5-917c-dc6842916963-kube-api-access-j78lx\") pod \"cinder-scheduler-0\" (UID: \"4882a26d-4240-46b5-917c-dc6842916963\") " pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.226457 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02427941-88ff-43f4-a73a-2048cf4b0e7c" path="/var/lib/kubelet/pods/02427941-88ff-43f4-a73a-2048cf4b0e7c/volumes" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.227140 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d506818-e4fd-46d6-a225-d1685dea3d6d" path="/var/lib/kubelet/pods/7d506818-e4fd-46d6-a225-d1685dea3d6d/volumes" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.227771 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d775db07-db5d-47fc-a76a-d70c5dda4aa4" path="/var/lib/kubelet/pods/d775db07-db5d-47fc-a76a-d70c5dda4aa4/volumes" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.299597 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.711244 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.786013 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b6496f477-9vvrm" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.790046 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79464686c6-vwq7l" Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.974470 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6978559f7b-bwf92"] Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.975431 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6978559f7b-bwf92" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-log" containerID="cri-o://801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1" gracePeriod=30 Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.975853 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6978559f7b-bwf92" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-api" containerID="cri-o://e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44" gracePeriod=30 Feb 03 12:25:58 crc kubenswrapper[4679]: I0203 12:25:58.995717 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 12:25:59 crc kubenswrapper[4679]: W0203 12:25:59.001826 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4882a26d_4240_46b5_917c_dc6842916963.slice/crio-c8744d9a1baa35b6159c77828447bb5192cbb03e1ce90d9dc40ed3380d1e2e7d WatchSource:0}: Error finding container c8744d9a1baa35b6159c77828447bb5192cbb03e1ce90d9dc40ed3380d1e2e7d: Status 404 returned error can't find the container with id c8744d9a1baa35b6159c77828447bb5192cbb03e1ce90d9dc40ed3380d1e2e7d Feb 03 12:25:59 crc kubenswrapper[4679]: I0203 12:25:59.330790 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:25:59 crc kubenswrapper[4679]: I0203 12:25:59.643619 4679 generic.go:334] "Generic (PLEG): container finished" podID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerID="801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1" exitCode=143 Feb 03 12:25:59 crc kubenswrapper[4679]: I0203 12:25:59.643728 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6978559f7b-bwf92" event={"ID":"2038b7f4-1671-44c7-bd2b-d5b21bc654fa","Type":"ContainerDied","Data":"801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1"} Feb 03 12:25:59 crc kubenswrapper[4679]: I0203 12:25:59.649246 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4882a26d-4240-46b5-917c-dc6842916963","Type":"ContainerStarted","Data":"c8744d9a1baa35b6159c77828447bb5192cbb03e1ce90d9dc40ed3380d1e2e7d"} Feb 03 12:26:00 crc kubenswrapper[4679]: I0203 12:26:00.660469 4679 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4882a26d-4240-46b5-917c-dc6842916963","Type":"ContainerStarted","Data":"27ac0621403002d77419776ed03b41ae29a54d76fd458fc1210feaac6ddd30d4"} Feb 03 12:26:00 crc kubenswrapper[4679]: I0203 12:26:00.768762 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:52396->10.217.0.147:8443: read: connection reset by peer" Feb 03 12:26:01 crc kubenswrapper[4679]: I0203 12:26:01.560332 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 03 12:26:01 crc kubenswrapper[4679]: I0203 12:26:01.696379 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4882a26d-4240-46b5-917c-dc6842916963","Type":"ContainerStarted","Data":"ddcdab00ac70ea8adac9aebb13ff3c2418d7c3085117793aeb73f53953552f45"} Feb 03 12:26:01 crc kubenswrapper[4679]: I0203 12:26:01.723501 4679 generic.go:334] "Generic (PLEG): container finished" podID="d2c53de0-396a-4234-969c-65e4c2227710" containerID="8b50418522d858f152f94f7287764ae5b114c63d74924de8cc696f45b2e543fc" exitCode=0 Feb 03 12:26:01 crc kubenswrapper[4679]: I0203 12:26:01.723579 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755ddc4dc6-5tjzs" event={"ID":"d2c53de0-396a-4234-969c-65e4c2227710","Type":"ContainerDied","Data":"8b50418522d858f152f94f7287764ae5b114c63d74924de8cc696f45b2e543fc"} Feb 03 12:26:01 crc kubenswrapper[4679]: I0203 12:26:01.744212 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.744183897 podStartE2EDuration="4.744183897s" podCreationTimestamp="2026-02-03 12:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:01.734655209 +0000 UTC m=+1234.209551317" watchObservedRunningTime="2026-02-03 12:26:01.744183897 +0000 UTC m=+1234.219079985" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.712043 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760127 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-logs\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760231 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-combined-ca-bundle\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760397 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-config-data\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760537 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-internal-tls-certs\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760586 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-scripts\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760673 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dgc6\" (UniqueName: \"kubernetes.io/projected/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-kube-api-access-7dgc6\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.760701 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-public-tls-certs\") pod \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\" (UID: \"2038b7f4-1671-44c7-bd2b-d5b21bc654fa\") " Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.761516 4679 generic.go:334] "Generic (PLEG): container finished" podID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerID="e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44" exitCode=0 Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.761690 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6978559f7b-bwf92" event={"ID":"2038b7f4-1671-44c7-bd2b-d5b21bc654fa","Type":"ContainerDied","Data":"e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44"} Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.761743 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6978559f7b-bwf92" event={"ID":"2038b7f4-1671-44c7-bd2b-d5b21bc654fa","Type":"ContainerDied","Data":"c40fe8a682144dd623168e0231595817581fab1c58a261b518773121d6eaac2d"} Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.761772 4679 scope.go:117] "RemoveContainer" 
containerID="e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.762014 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6978559f7b-bwf92" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.774418 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-logs" (OuterVolumeSpecName: "logs") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.784587 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-scripts" (OuterVolumeSpecName: "scripts") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.801018 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-kube-api-access-7dgc6" (OuterVolumeSpecName: "kube-api-access-7dgc6") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "kube-api-access-7dgc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.811660 4679 generic.go:334] "Generic (PLEG): container finished" podID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerID="c2967118f4e04fe635e234e603378499b10810ba391817010ef28f04e388a11a" exitCode=0 Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.812562 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54f6c577-sh4k7" event={"ID":"796e124c-5af3-4e3b-b261-a5c7f0348bb1","Type":"ContainerDied","Data":"c2967118f4e04fe635e234e603378499b10810ba391817010ef28f04e388a11a"} Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.866939 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.866985 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dgc6\" (UniqueName: \"kubernetes.io/projected/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-kube-api-access-7dgc6\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.866999 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.910769 4679 scope.go:117] "RemoveContainer" containerID="801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.936132 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-config-data" (OuterVolumeSpecName: "config-data") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.961378 4679 scope.go:117] "RemoveContainer" containerID="e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44" Feb 03 12:26:02 crc kubenswrapper[4679]: E0203 12:26:02.963597 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44\": container with ID starting with e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44 not found: ID does not exist" containerID="e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.963720 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44"} err="failed to get container status \"e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44\": rpc error: code = NotFound desc = could not find container \"e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44\": container with ID starting with e809836e4d9ae48dd2700bf38a62674e6c71acfad697eb85c20fec1ce5f3da44 not found: ID does not exist" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.963842 4679 scope.go:117] "RemoveContainer" containerID="801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1" Feb 03 12:26:02 crc kubenswrapper[4679]: E0203 12:26:02.967555 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1\": container with ID starting with 801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1 not found: ID does not exist" containerID="801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.967623 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1"} err="failed to get container status \"801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1\": rpc error: code = NotFound desc = could not find container \"801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1\": container with ID starting with 801bde51f9b32d0e76a067418ff7a8534d9db2c4829aa712230da2c0b0da9ec1 not found: ID does not exist" Feb 03 12:26:02 crc kubenswrapper[4679]: I0203 12:26:02.969618 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.016597 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.021906 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.030274 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2038b7f4-1671-44c7-bd2b-d5b21bc654fa" (UID: "2038b7f4-1671-44c7-bd2b-d5b21bc654fa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.071940 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.071992 4679 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.072005 4679 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2038b7f4-1671-44c7-bd2b-d5b21bc654fa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.128369 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6978559f7b-bwf92"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.146576 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6978559f7b-bwf92"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.152116 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.173930 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-httpd-config\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.173987 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-config\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.174112 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-ovndb-tls-certs\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.174276 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-internal-tls-certs\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.174352 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-public-tls-certs\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.174462 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zsz4\" (UniqueName: \"kubernetes.io/projected/796e124c-5af3-4e3b-b261-a5c7f0348bb1-kube-api-access-6zsz4\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.174534 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-combined-ca-bundle\") pod \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\" (UID: \"796e124c-5af3-4e3b-b261-a5c7f0348bb1\") " Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.183536 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.184154 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796e124c-5af3-4e3b-b261-a5c7f0348bb1-kube-api-access-6zsz4" (OuterVolumeSpecName: "kube-api-access-6zsz4") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "kube-api-access-6zsz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.267595 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.278109 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zsz4\" (UniqueName: \"kubernetes.io/projected/796e124c-5af3-4e3b-b261-a5c7f0348bb1-kube-api-access-6zsz4\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.278160 4679 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.278170 4679 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.280775 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:03 crc kubenswrapper[4679]: E0203 12:26:03.281481 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-httpd" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281507 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-httpd" Feb 03 12:26:03 crc kubenswrapper[4679]: E0203 12:26:03.281550 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-api" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281558 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-api" Feb 03 12:26:03 crc kubenswrapper[4679]: E0203 12:26:03.281581 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-log" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281590 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-log" Feb 03 12:26:03 crc kubenswrapper[4679]: E0203 12:26:03.281607 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-api" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281614 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-api" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281830 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-api" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281850 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-api" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281860 4679 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" containerName="placement-log" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.281881 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" containerName="neutron-httpd" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.282843 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.287557 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-b46tq" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.300068 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.303964 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.306608 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.335481 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.381060 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvd59\" (UniqueName: \"kubernetes.io/projected/2ce1aeab-a86d-4cca-a7cc-2440d4506579-kube-api-access-cvd59\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.381186 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config-secret\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.381228 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.381323 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.387820 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.433226 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-config" (OuterVolumeSpecName: "config") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.473101 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496186 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvd59\" (UniqueName: \"kubernetes.io/projected/2ce1aeab-a86d-4cca-a7cc-2440d4506579-kube-api-access-cvd59\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496328 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config-secret\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496394 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496500 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496583 4679 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496598 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.496611 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.497984 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" 
Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.501729 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config-secret\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.503679 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "796e124c-5af3-4e3b-b261-a5c7f0348bb1" (UID: "796e124c-5af3-4e3b-b261-a5c7f0348bb1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.515739 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.528237 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvd59\" (UniqueName: \"kubernetes.io/projected/2ce1aeab-a86d-4cca-a7cc-2440d4506579-kube-api-access-cvd59\") pod \"openstackclient\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.581737 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.599320 4679 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/796e124c-5af3-4e3b-b261-a5c7f0348bb1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.837793 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54f6c577-sh4k7" event={"ID":"796e124c-5af3-4e3b-b261-a5c7f0348bb1","Type":"ContainerDied","Data":"3d7ad5f7bea713e7388228d111108ed96571805545b07a0e83eaafcdf8311c5b"} Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.838404 4679 scope.go:117] "RemoveContainer" containerID="dc9a41aed40a2ec3da74c68eb6358cac759e6eff74b946a53bc543ef78a242f1" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.838656 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54f6c577-sh4k7" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.851650 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.882109 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.885211 4679 scope.go:117] "RemoveContainer" containerID="c2967118f4e04fe635e234e603378499b10810ba391817010ef28f04e388a11a" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.922482 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.924703 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.933138 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.950518 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54f6c577-sh4k7"] Feb 03 12:26:03 crc kubenswrapper[4679]: I0203 12:26:03.957289 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54f6c577-sh4k7"] Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.016771 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqtx\" (UniqueName: \"kubernetes.io/projected/9015531f-675f-40b4-a643-94a33a87592b-kube-api-access-4hqtx\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.016929 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9015531f-675f-40b4-a643-94a33a87592b-openstack-config\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.016963 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9015531f-675f-40b4-a643-94a33a87592b-openstack-config-secret\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.017013 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9015531f-675f-40b4-a643-94a33a87592b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.123621 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hqtx\" (UniqueName: \"kubernetes.io/projected/9015531f-675f-40b4-a643-94a33a87592b-kube-api-access-4hqtx\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.125847 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9015531f-675f-40b4-a643-94a33a87592b-openstack-config\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.125883 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9015531f-675f-40b4-a643-94a33a87592b-openstack-config-secret\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.125937 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9015531f-675f-40b4-a643-94a33a87592b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " 
pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.127743 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9015531f-675f-40b4-a643-94a33a87592b-openstack-config\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.135322 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9015531f-675f-40b4-a643-94a33a87592b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.141444 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9015531f-675f-40b4-a643-94a33a87592b-openstack-config-secret\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.145691 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hqtx\" (UniqueName: \"kubernetes.io/projected/9015531f-675f-40b4-a643-94a33a87592b-kube-api-access-4hqtx\") pod \"openstackclient\" (UID: \"9015531f-675f-40b4-a643-94a33a87592b\") " pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.230031 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2038b7f4-1671-44c7-bd2b-d5b21bc654fa" path="/var/lib/kubelet/pods/2038b7f4-1671-44c7-bd2b-d5b21bc654fa/volumes" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.231107 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796e124c-5af3-4e3b-b261-a5c7f0348bb1" path="/var/lib/kubelet/pods/796e124c-5af3-4e3b-b261-a5c7f0348bb1/volumes" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.272716 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: E0203 12:26:04.319616 4679 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 03 12:26:04 crc kubenswrapper[4679]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_2ce1aeab-a86d-4cca-a7cc-2440d4506579_0(5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7" Netns:"/var/run/netns/05418a33-813e-479d-aeae-e74eb9b078a1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7;K8S_POD_UID=2ce1aeab-a86d-4cca-a7cc-2440d4506579" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/2ce1aeab-a86d-4cca-a7cc-2440d4506579:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7 network default NAD default] [openstack/openstackclient 5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7 network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:aa [10.217.0.170/23] Feb 03 12:26:04 crc kubenswrapper[4679]: ' Feb 03 12:26:04 crc kubenswrapper[4679]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 12:26:04 crc kubenswrapper[4679]: > Feb 03 12:26:04 crc kubenswrapper[4679]: E0203 12:26:04.319737 4679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 03 12:26:04 crc kubenswrapper[4679]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_2ce1aeab-a86d-4cca-a7cc-2440d4506579_0(5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7" Netns:"/var/run/netns/05418a33-813e-479d-aeae-e74eb9b078a1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7;K8S_POD_UID=2ce1aeab-a86d-4cca-a7cc-2440d4506579" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/2ce1aeab-a86d-4cca-a7cc-2440d4506579:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7 network default NAD default] [openstack/openstackclient 5460bc1a32eb03ddbd10b05a24d2fce688fe72b3fc3755634de7c9271e8935f7 
network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:aa [10.217.0.170/23] Feb 03 12:26:04 crc kubenswrapper[4679]: ' Feb 03 12:26:04 crc kubenswrapper[4679]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 12:26:04 crc kubenswrapper[4679]: > pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.839854 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 12:26:04 crc kubenswrapper[4679]: W0203 12:26:04.843893 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9015531f_675f_40b4_a643_94a33a87592b.slice/crio-bbfa75db1ccd77cbf82d16efa94818abaf38f4cd5776febcfd1656ce8f72bc34 WatchSource:0}: Error finding container bbfa75db1ccd77cbf82d16efa94818abaf38f4cd5776febcfd1656ce8f72bc34: Status 404 returned error can't find the container with id bbfa75db1ccd77cbf82d16efa94818abaf38f4cd5776febcfd1656ce8f72bc34 Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.870829 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.870812 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9015531f-675f-40b4-a643-94a33a87592b","Type":"ContainerStarted","Data":"bbfa75db1ccd77cbf82d16efa94818abaf38f4cd5776febcfd1656ce8f72bc34"} Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.887431 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.891620 4679 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2ce1aeab-a86d-4cca-a7cc-2440d4506579" podUID="9015531f-675f-40b4-a643-94a33a87592b" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.947153 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvd59\" (UniqueName: \"kubernetes.io/projected/2ce1aeab-a86d-4cca-a7cc-2440d4506579-kube-api-access-cvd59\") pod \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.947284 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config-secret\") pod \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.947511 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config\") pod \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.947576 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-combined-ca-bundle\") pod \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\" (UID: \"2ce1aeab-a86d-4cca-a7cc-2440d4506579\") " Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.948774 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2ce1aeab-a86d-4cca-a7cc-2440d4506579" (UID: "2ce1aeab-a86d-4cca-a7cc-2440d4506579"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.955131 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ce1aeab-a86d-4cca-a7cc-2440d4506579" (UID: "2ce1aeab-a86d-4cca-a7cc-2440d4506579"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.958200 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce1aeab-a86d-4cca-a7cc-2440d4506579-kube-api-access-cvd59" (OuterVolumeSpecName: "kube-api-access-cvd59") pod "2ce1aeab-a86d-4cca-a7cc-2440d4506579" (UID: "2ce1aeab-a86d-4cca-a7cc-2440d4506579"). InnerVolumeSpecName "kube-api-access-cvd59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:04 crc kubenswrapper[4679]: I0203 12:26:04.964171 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2ce1aeab-a86d-4cca-a7cc-2440d4506579" (UID: "2ce1aeab-a86d-4cca-a7cc-2440d4506579"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.050992 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.051035 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.051046 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvd59\" (UniqueName: \"kubernetes.io/projected/2ce1aeab-a86d-4cca-a7cc-2440d4506579-kube-api-access-cvd59\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.051059 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ce1aeab-a86d-4cca-a7cc-2440d4506579-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.090617 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-56b6b9b667-hn9mj"] Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.095809 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.101339 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.101725 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.103863 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.113629 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56b6b9b667-hn9mj"] Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.153852 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-combined-ca-bundle\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.154224 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7034878f-0540-438b-b9b3-5e726c04e49c-etc-swift\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.154379 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-config-data\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.154589 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7034878f-0540-438b-b9b3-5e726c04e49c-run-httpd\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.154767 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-internal-tls-certs\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.154952 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7034878f-0540-438b-b9b3-5e726c04e49c-log-httpd\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.155077 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-public-tls-certs\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.155186 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tx5p\" (UniqueName: \"kubernetes.io/projected/7034878f-0540-438b-b9b3-5e726c04e49c-kube-api-access-6tx5p\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257543 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-public-tls-certs\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257601 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tx5p\" (UniqueName: \"kubernetes.io/projected/7034878f-0540-438b-b9b3-5e726c04e49c-kube-api-access-6tx5p\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257680 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-combined-ca-bundle\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257705 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7034878f-0540-438b-b9b3-5e726c04e49c-etc-swift\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: 
I0203 12:26:05.257731 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-config-data\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257789 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7034878f-0540-438b-b9b3-5e726c04e49c-run-httpd\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257814 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-internal-tls-certs\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.257850 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7034878f-0540-438b-b9b3-5e726c04e49c-log-httpd\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.258592 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7034878f-0540-438b-b9b3-5e726c04e49c-log-httpd\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.258839 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7034878f-0540-438b-b9b3-5e726c04e49c-run-httpd\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.263889 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-config-data\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.264401 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7034878f-0540-438b-b9b3-5e726c04e49c-etc-swift\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.265203 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-public-tls-certs\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.266418 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-internal-tls-certs\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.270625 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7034878f-0540-438b-b9b3-5e726c04e49c-combined-ca-bundle\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.278642 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tx5p\" (UniqueName: \"kubernetes.io/projected/7034878f-0540-438b-b9b3-5e726c04e49c-kube-api-access-6tx5p\") pod \"swift-proxy-56b6b9b667-hn9mj\" (UID: \"7034878f-0540-438b-b9b3-5e726c04e49c\") " pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.418126 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.883771 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 12:26:05 crc kubenswrapper[4679]: I0203 12:26:05.904493 4679 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2ce1aeab-a86d-4cca-a7cc-2440d4506579" podUID="9015531f-675f-40b4-a643-94a33a87592b" Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.114512 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56b6b9b667-hn9mj"] Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.239668 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce1aeab-a86d-4cca-a7cc-2440d4506579" path="/var/lib/kubelet/pods/2ce1aeab-a86d-4cca-a7cc-2440d4506579/volumes" Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.736026 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.736558 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.736620 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.737647 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c61e86b798113e11bc0b821a5c8a3fd559823d817a33888c3615c45ebf2d2b95"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.737721 4679 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://c61e86b798113e11bc0b821a5c8a3fd559823d817a33888c3615c45ebf2d2b95" gracePeriod=600 Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.992103 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56b6b9b667-hn9mj" event={"ID":"7034878f-0540-438b-b9b3-5e726c04e49c","Type":"ContainerStarted","Data":"c908e054b705d427113637cf18c67cd40d363c7405995f554bfe0401923ffae1"} Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.992537 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56b6b9b667-hn9mj" event={"ID":"7034878f-0540-438b-b9b3-5e726c04e49c","Type":"ContainerStarted","Data":"a1b33bd5520c8ddbd4f000fa6682bf590693010f314c5141406abc545a17ea83"} Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.992547 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56b6b9b667-hn9mj" event={"ID":"7034878f-0540-438b-b9b3-5e726c04e49c","Type":"ContainerStarted","Data":"96a215e2883ebd26e1caf6628338d69d15baf5301a306f680131b5de504217b6"} Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.993662 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:06 crc kubenswrapper[4679]: I0203 12:26:06.993685 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56b6b9b667-hn9mj" Feb 03 12:26:07 crc kubenswrapper[4679]: I0203 12:26:07.035017 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-56b6b9b667-hn9mj" podStartSLOduration=2.034986252 podStartE2EDuration="2.034986252s" podCreationTimestamp="2026-02-03 12:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:07.019982972 +0000 UTC m=+1239.494879060" watchObservedRunningTime="2026-02-03 12:26:07.034986252 +0000 UTC m=+1239.509882340" Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.016571 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="c61e86b798113e11bc0b821a5c8a3fd559823d817a33888c3615c45ebf2d2b95" exitCode=0 Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.016670 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"c61e86b798113e11bc0b821a5c8a3fd559823d817a33888c3615c45ebf2d2b95"} Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.017562 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"b1900a15438f6b44d35273c5044d3b6a00c1d9eb0c447a4d0cab3da818bdee60"} Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.017591 4679 scope.go:117] "RemoveContainer" containerID="8c7f7c8aab7624469328d1de64c08b453b25a5c763215ebe97829d21aefac1e6" Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.149215 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.149585 4679 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-central-agent" containerID="cri-o://ef333dfd8ffb3353933bc6f48b449bb91a9b778fc7dbdabd3b777b6b3ac6b958" gracePeriod=30 Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.150641 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="proxy-httpd" containerID="cri-o://d5562ced629fb4395f0c3713b3687397a23931c3b43470db26fa4057d82e6b49" gracePeriod=30 Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.150702 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="sg-core" containerID="cri-o://3f2e5de3c76b494383c477d2712a4bd28c70ae0c518a0fc7e645aa099fcecb8e" gracePeriod=30 Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.150745 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-notification-agent" containerID="cri-o://d125ca68c1b41f8a3e7cec808f9671b075ae7c419ea999fc8c34604964525143" gracePeriod=30 Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.166278 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.536410 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 12:26:08 crc kubenswrapper[4679]: I0203 12:26:08.651811 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 12:26:09 crc kubenswrapper[4679]: I0203 12:26:09.038401 4679 generic.go:334] "Generic (PLEG): container finished" podID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerID="d5562ced629fb4395f0c3713b3687397a23931c3b43470db26fa4057d82e6b49" exitCode=0 Feb 03 12:26:09 crc kubenswrapper[4679]: I0203 12:26:09.038787 4679 generic.go:334] "Generic (PLEG): container finished" podID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerID="3f2e5de3c76b494383c477d2712a4bd28c70ae0c518a0fc7e645aa099fcecb8e" exitCode=2 Feb 03 12:26:09 crc kubenswrapper[4679]: I0203 12:26:09.038800 4679 generic.go:334] "Generic (PLEG): container finished" podID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerID="ef333dfd8ffb3353933bc6f48b449bb91a9b778fc7dbdabd3b777b6b3ac6b958" exitCode=0 Feb 03 12:26:09 crc kubenswrapper[4679]: I0203 12:26:09.039211 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerDied","Data":"d5562ced629fb4395f0c3713b3687397a23931c3b43470db26fa4057d82e6b49"} Feb 03 12:26:09 crc kubenswrapper[4679]: I0203 12:26:09.039329 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerDied","Data":"3f2e5de3c76b494383c477d2712a4bd28c70ae0c518a0fc7e645aa099fcecb8e"} Feb 03 12:26:09 crc kubenswrapper[4679]: I0203 12:26:09.039354 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerDied","Data":"ef333dfd8ffb3353933bc6f48b449bb91a9b778fc7dbdabd3b777b6b3ac6b958"} Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.475710 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6cwkq"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.477824 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.516533 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6cwkq"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.525230 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qt5c\" (UniqueName: \"kubernetes.io/projected/ec0ba804-303f-44b9-8ba0-68278fee0f17-kube-api-access-6qt5c\") pod \"nova-api-db-create-6cwkq\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") " pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.525427 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec0ba804-303f-44b9-8ba0-68278fee0f17-operator-scripts\") pod \"nova-api-db-create-6cwkq\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") " pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.573413 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vmpqh"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.575057 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.594460 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-90d7-account-create-update-zdz8k"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.596173 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.605965 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.627563 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec0ba804-303f-44b9-8ba0-68278fee0f17-operator-scripts\") pod \"nova-api-db-create-6cwkq\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") " pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.627636 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-operator-scripts\") pod \"nova-cell0-db-create-vmpqh\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") " pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.627708 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6fr\" (UniqueName: \"kubernetes.io/projected/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-kube-api-access-cd6fr\") pod \"nova-cell0-db-create-vmpqh\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") " pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.627773 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qt5c\" (UniqueName: \"kubernetes.io/projected/ec0ba804-303f-44b9-8ba0-68278fee0f17-kube-api-access-6qt5c\") pod \"nova-api-db-create-6cwkq\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") " pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.627795 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tlf\" (UniqueName: \"kubernetes.io/projected/e0696ddd-2b30-4e81-954c-9219fa89b5f8-kube-api-access-29tlf\") pod \"nova-api-90d7-account-create-update-zdz8k\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") " pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.627860 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0696ddd-2b30-4e81-954c-9219fa89b5f8-operator-scripts\") pod \"nova-api-90d7-account-create-update-zdz8k\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") " pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.629751 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec0ba804-303f-44b9-8ba0-68278fee0f17-operator-scripts\") pod \"nova-api-db-create-6cwkq\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") " pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.633539 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vmpqh"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.675468 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-90d7-account-create-update-zdz8k"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.686279 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6qt5c\" (UniqueName: \"kubernetes.io/projected/ec0ba804-303f-44b9-8ba0-68278fee0f17-kube-api-access-6qt5c\") pod \"nova-api-db-create-6cwkq\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") " pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.729155 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-operator-scripts\") pod \"nova-cell0-db-create-vmpqh\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") " pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.729709 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6fr\" (UniqueName: \"kubernetes.io/projected/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-kube-api-access-cd6fr\") pod \"nova-cell0-db-create-vmpqh\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") " pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.729801 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tlf\" (UniqueName: \"kubernetes.io/projected/e0696ddd-2b30-4e81-954c-9219fa89b5f8-kube-api-access-29tlf\") pod \"nova-api-90d7-account-create-update-zdz8k\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") " pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.729890 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0696ddd-2b30-4e81-954c-9219fa89b5f8-operator-scripts\") pod \"nova-api-90d7-account-create-update-zdz8k\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") " pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.730458 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-operator-scripts\") pod \"nova-cell0-db-create-vmpqh\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") " pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.730893 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0696ddd-2b30-4e81-954c-9219fa89b5f8-operator-scripts\") pod \"nova-api-90d7-account-create-update-zdz8k\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") " pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.734973 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jwvqh"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.739469 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.758067 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tlf\" (UniqueName: \"kubernetes.io/projected/e0696ddd-2b30-4e81-954c-9219fa89b5f8-kube-api-access-29tlf\") pod \"nova-api-90d7-account-create-update-zdz8k\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") " pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.764404 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6fr\" (UniqueName: \"kubernetes.io/projected/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-kube-api-access-cd6fr\") pod \"nova-cell0-db-create-vmpqh\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") " pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.765550 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jwvqh"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.811279 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6cwkq" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.849095 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-73aa-account-create-update-pmdvr"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.851082 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.854868 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.885438 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-73aa-account-create-update-pmdvr"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.901835 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vmpqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.926714 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90d7-account-create-update-zdz8k" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.940093 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-operator-scripts\") pod \"nova-cell1-db-create-jwvqh\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") " pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.940285 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvcq\" (UniqueName: \"kubernetes.io/projected/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-kube-api-access-wvvcq\") pod \"nova-cell1-db-create-jwvqh\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") " pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.956754 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ec45-account-create-update-4kxqf"] Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.958415 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.972871 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 03 12:26:10 crc kubenswrapper[4679]: I0203 12:26:10.976686 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ec45-account-create-update-4kxqf"] Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.042468 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-operator-scripts\") pod \"nova-cell1-db-create-jwvqh\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") " pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.042544 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmqbw\" (UniqueName: \"kubernetes.io/projected/60aa7052-469c-4202-83c1-780e52588e83-kube-api-access-nmqbw\") pod \"nova-cell0-73aa-account-create-update-pmdvr\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") " pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.042620 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60aa7052-469c-4202-83c1-780e52588e83-operator-scripts\") pod \"nova-cell0-73aa-account-create-update-pmdvr\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") " pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.042679 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvcq\" (UniqueName: \"kubernetes.io/projected/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-kube-api-access-wvvcq\") pod \"nova-cell1-db-create-jwvqh\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") " pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.044935 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-operator-scripts\") pod \"nova-cell1-db-create-jwvqh\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") " pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.082917 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvcq\" (UniqueName: \"kubernetes.io/projected/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-kube-api-access-wvvcq\") pod \"nova-cell1-db-create-jwvqh\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") " pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.146350 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv998\" (UniqueName: \"kubernetes.io/projected/27c7b843-97ec-45e7-b87a-cff6549aee8a-kube-api-access-hv998\") pod \"nova-cell1-ec45-account-create-update-4kxqf\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") " pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.146453 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmqbw\" (UniqueName: 
\"kubernetes.io/projected/60aa7052-469c-4202-83c1-780e52588e83-kube-api-access-nmqbw\") pod \"nova-cell0-73aa-account-create-update-pmdvr\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") " pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.146979 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c7b843-97ec-45e7-b87a-cff6549aee8a-operator-scripts\") pod \"nova-cell1-ec45-account-create-update-4kxqf\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") " pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.147024 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60aa7052-469c-4202-83c1-780e52588e83-operator-scripts\") pod \"nova-cell0-73aa-account-create-update-pmdvr\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") " pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.147912 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60aa7052-469c-4202-83c1-780e52588e83-operator-scripts\") pod \"nova-cell0-73aa-account-create-update-pmdvr\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") " pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.164825 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jwvqh" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.170096 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmqbw\" (UniqueName: \"kubernetes.io/projected/60aa7052-469c-4202-83c1-780e52588e83-kube-api-access-nmqbw\") pod \"nova-cell0-73aa-account-create-update-pmdvr\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") " pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.248885 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv998\" (UniqueName: \"kubernetes.io/projected/27c7b843-97ec-45e7-b87a-cff6549aee8a-kube-api-access-hv998\") pod \"nova-cell1-ec45-account-create-update-4kxqf\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") " pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.248989 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c7b843-97ec-45e7-b87a-cff6549aee8a-operator-scripts\") pod \"nova-cell1-ec45-account-create-update-4kxqf\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") " pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.249790 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c7b843-97ec-45e7-b87a-cff6549aee8a-operator-scripts\") pod \"nova-cell1-ec45-account-create-update-4kxqf\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") " pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.258090 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.275506 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv998\" (UniqueName: \"kubernetes.io/projected/27c7b843-97ec-45e7-b87a-cff6549aee8a-kube-api-access-hv998\") pod \"nova-cell1-ec45-account-create-update-4kxqf\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") " pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.283527 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.478186 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6cwkq"] Feb 03 12:26:11 crc kubenswrapper[4679]: W0203 12:26:11.501176 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec0ba804_303f_44b9_8ba0_68278fee0f17.slice/crio-8da52134525f939823ea7b11bbc13cf310fc82e02f93870d0b7c8f8174fe5249 WatchSource:0}: Error finding container 8da52134525f939823ea7b11bbc13cf310fc82e02f93870d0b7c8f8174fe5249: Status 404 returned error can't find the container with id 8da52134525f939823ea7b11bbc13cf310fc82e02f93870d0b7c8f8174fe5249 Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.740247 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-90d7-account-create-update-zdz8k"] Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.821152 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vmpqh"] Feb 03 12:26:11 crc kubenswrapper[4679]: I0203 12:26:11.907239 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jwvqh"] Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.088275 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwvqh" event={"ID":"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e","Type":"ContainerStarted","Data":"315ab3fcb3b0db7a59fed354ff969e26318228e213999b5c9ad63d7a89fa035b"} Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.091259 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-90d7-account-create-update-zdz8k" event={"ID":"e0696ddd-2b30-4e81-954c-9219fa89b5f8","Type":"ContainerStarted","Data":"c92476bbfcf2806fea4a15d5d6af8fca0926311decb7ba26450f3a111ede8fb2"} Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.094314 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6cwkq" event={"ID":"ec0ba804-303f-44b9-8ba0-68278fee0f17","Type":"ContainerStarted","Data":"7bc65ae5f2391d7814e5acedf8737a5f2d7a056e609920b0206a3512091dd5a5"} Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.094418 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6cwkq" event={"ID":"ec0ba804-303f-44b9-8ba0-68278fee0f17","Type":"ContainerStarted","Data":"8da52134525f939823ea7b11bbc13cf310fc82e02f93870d0b7c8f8174fe5249"} Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.096333 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vmpqh" event={"ID":"ac3d52df-3f8a-4ba1-97eb-889f68e40cae","Type":"ContainerStarted","Data":"2383240f6b0f4dd71934cc582778d852af1e8e331277a954db8488726c2f6898"} Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.101968 4679 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ec45-account-create-update-4kxqf"]
Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.119416 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-73aa-account-create-update-pmdvr"]
Feb 03 12:26:12 crc kubenswrapper[4679]: I0203 12:26:12.123885 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-6cwkq" podStartSLOduration=2.123857811 podStartE2EDuration="2.123857811s" podCreationTimestamp="2026-02-03 12:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:12.117904446 +0000 UTC m=+1244.592800534" watchObservedRunningTime="2026-02-03 12:26:12.123857811 +0000 UTC m=+1244.598753899"
Feb 03 12:26:12 crc kubenswrapper[4679]: W0203 12:26:12.127932 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27c7b843_97ec_45e7_b87a_cff6549aee8a.slice/crio-c2f4c9383cd6a64d2dca5ba2ff5dcb8009b16fabb5dce38e059f25425b6f8d41 WatchSource:0}: Error finding container c2f4c9383cd6a64d2dca5ba2ff5dcb8009b16fabb5dce38e059f25425b6f8d41: Status 404 returned error can't find the container with id c2f4c9383cd6a64d2dca5ba2ff5dcb8009b16fabb5dce38e059f25425b6f8d41
Feb 03 12:26:12 crc kubenswrapper[4679]: W0203 12:26:12.134522 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60aa7052_469c_4202_83c1_780e52588e83.slice/crio-8751b314084dfe97c0301a95baf530ab4365b67e42d8f6c0d26c04f15d60e723 WatchSource:0}: Error finding container 8751b314084dfe97c0301a95baf530ab4365b67e42d8f6c0d26c04f15d60e723: Status 404 returned error can't find the container with id 8751b314084dfe97c0301a95baf530ab4365b67e42d8f6c0d26c04f15d60e723
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.118526 4679 generic.go:334] "Generic (PLEG): container finished" podID="bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" containerID="d34823c3d502d9c677b71c45f4cba003d692b8498dc02716f3cbf5d71c0a2fd7" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.119375 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwvqh" event={"ID":"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e","Type":"ContainerDied","Data":"d34823c3d502d9c677b71c45f4cba003d692b8498dc02716f3cbf5d71c0a2fd7"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.125374 4679 generic.go:334] "Generic (PLEG): container finished" podID="e0696ddd-2b30-4e81-954c-9219fa89b5f8" containerID="b34d0695bbd585feab8d19f495117500923fc21dc4cdf9f375e81f4f0c34ad44" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.125603 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-90d7-account-create-update-zdz8k" event={"ID":"e0696ddd-2b30-4e81-954c-9219fa89b5f8","Type":"ContainerDied","Data":"b34d0695bbd585feab8d19f495117500923fc21dc4cdf9f375e81f4f0c34ad44"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.131634 4679 generic.go:334] "Generic (PLEG): container finished" podID="ec0ba804-303f-44b9-8ba0-68278fee0f17" containerID="7bc65ae5f2391d7814e5acedf8737a5f2d7a056e609920b0206a3512091dd5a5" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.131891 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6cwkq" event={"ID":"ec0ba804-303f-44b9-8ba0-68278fee0f17","Type":"ContainerDied","Data":"7bc65ae5f2391d7814e5acedf8737a5f2d7a056e609920b0206a3512091dd5a5"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.137962 4679 generic.go:334] "Generic (PLEG): container finished" podID="ac3d52df-3f8a-4ba1-97eb-889f68e40cae" containerID="1e1dc7ac98104483e7b2d3ebaf36bfcd1b236b7ecda820ceb9a97929d66f4c76" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.138244 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vmpqh" event={"ID":"ac3d52df-3f8a-4ba1-97eb-889f68e40cae","Type":"ContainerDied","Data":"1e1dc7ac98104483e7b2d3ebaf36bfcd1b236b7ecda820ceb9a97929d66f4c76"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.141419 4679 generic.go:334] "Generic (PLEG): container finished" podID="27c7b843-97ec-45e7-b87a-cff6549aee8a" containerID="cc5a8b11a553d985dbdc1e2beaf773dfd98276809f0fe0b93032f18290358f01" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.141577 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" event={"ID":"27c7b843-97ec-45e7-b87a-cff6549aee8a","Type":"ContainerDied","Data":"cc5a8b11a553d985dbdc1e2beaf773dfd98276809f0fe0b93032f18290358f01"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.141610 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" event={"ID":"27c7b843-97ec-45e7-b87a-cff6549aee8a","Type":"ContainerStarted","Data":"c2f4c9383cd6a64d2dca5ba2ff5dcb8009b16fabb5dce38e059f25425b6f8d41"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.152406 4679 generic.go:334] "Generic (PLEG): container finished" podID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerID="d125ca68c1b41f8a3e7cec808f9671b075ae7c419ea999fc8c34604964525143" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.152632 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerDied","Data":"d125ca68c1b41f8a3e7cec808f9671b075ae7c419ea999fc8c34604964525143"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.167077 4679 generic.go:334] "Generic (PLEG): container finished" podID="60aa7052-469c-4202-83c1-780e52588e83" containerID="625a09bdfac9c2afdfdaf0455998fdc19b93c8c88b50d2b4024909116e592bb6" exitCode=0
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.167135 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" event={"ID":"60aa7052-469c-4202-83c1-780e52588e83","Type":"ContainerDied","Data":"625a09bdfac9c2afdfdaf0455998fdc19b93c8c88b50d2b4024909116e592bb6"}
Feb 03 12:26:13 crc kubenswrapper[4679]: I0203 12:26:13.167168 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" event={"ID":"60aa7052-469c-4202-83c1-780e52588e83","Type":"ContainerStarted","Data":"8751b314084dfe97c0301a95baf530ab4365b67e42d8f6c0d26c04f15d60e723"}
Feb 03 12:26:14 crc kubenswrapper[4679]: I0203 12:26:14.351960 4679 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc316e952-31f5-42ef-bd28-f09412dd0118"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc316e952-31f5-42ef-bd28-f09412dd0118] : Timed out while waiting for systemd to remove kubepods-besteffort-podc316e952_31f5_42ef_bd28_f09412dd0118.slice"
Feb 03 12:26:14 crc kubenswrapper[4679]: E0203 12:26:14.352563 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc316e952-31f5-42ef-bd28-f09412dd0118] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc316e952-31f5-42ef-bd28-f09412dd0118] : Timed out while waiting for systemd to remove kubepods-besteffort-podc316e952_31f5_42ef_bd28_f09412dd0118.slice" pod="openstack/dnsmasq-dns-85ff748b95-7fxst" podUID="c316e952-31f5-42ef-bd28-f09412dd0118"
Feb 03 12:26:15 crc kubenswrapper[4679]: I0203 12:26:15.228242 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-7fxst"
Feb 03 12:26:15 crc kubenswrapper[4679]: I0203 12:26:15.277293 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-7fxst"]
Feb 03 12:26:15 crc kubenswrapper[4679]: I0203 12:26:15.289874 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-7fxst"]
Feb 03 12:26:15 crc kubenswrapper[4679]: I0203 12:26:15.427995 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56b6b9b667-hn9mj"
Feb 03 12:26:15 crc kubenswrapper[4679]: I0203 12:26:15.429576 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56b6b9b667-hn9mj"
Feb 03 12:26:16 crc kubenswrapper[4679]: I0203 12:26:16.227154 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c316e952-31f5-42ef-bd28-f09412dd0118" path="/var/lib/kubelet/pods/c316e952-31f5-42ef-bd28-f09412dd0118/volumes"
Feb 03 12:26:18 crc kubenswrapper[4679]: I0203 12:26:18.536738 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-755ddc4dc6-5tjzs" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.291498 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.294532 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7","Type":"ContainerDied","Data":"ad8653712dee83539ad3960d82334df6cda2f0974c605175e22f3082355f6ed1"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.294710 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8653712dee83539ad3960d82334df6cda2f0974c605175e22f3082355f6ed1"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.297178 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr" event={"ID":"60aa7052-469c-4202-83c1-780e52588e83","Type":"ContainerDied","Data":"8751b314084dfe97c0301a95baf530ab4365b67e42d8f6c0d26c04f15d60e723"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.297228 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8751b314084dfe97c0301a95baf530ab4365b67e42d8f6c0d26c04f15d60e723"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.297376 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-73aa-account-create-update-pmdvr"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.302580 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwvqh" event={"ID":"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e","Type":"ContainerDied","Data":"315ab3fcb3b0db7a59fed354ff969e26318228e213999b5c9ad63d7a89fa035b"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.302645 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="315ab3fcb3b0db7a59fed354ff969e26318228e213999b5c9ad63d7a89fa035b"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.308692 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-90d7-account-create-update-zdz8k" event={"ID":"e0696ddd-2b30-4e81-954c-9219fa89b5f8","Type":"ContainerDied","Data":"c92476bbfcf2806fea4a15d5d6af8fca0926311decb7ba26450f3a111ede8fb2"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.309011 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92476bbfcf2806fea4a15d5d6af8fca0926311decb7ba26450f3a111ede8fb2"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.327037 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6cwkq" event={"ID":"ec0ba804-303f-44b9-8ba0-68278fee0f17","Type":"ContainerDied","Data":"8da52134525f939823ea7b11bbc13cf310fc82e02f93870d0b7c8f8174fe5249"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.327129 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da52134525f939823ea7b11bbc13cf310fc82e02f93870d0b7c8f8174fe5249"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.331377 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vmpqh" event={"ID":"ac3d52df-3f8a-4ba1-97eb-889f68e40cae","Type":"ContainerDied","Data":"2383240f6b0f4dd71934cc582778d852af1e8e331277a954db8488726c2f6898"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.331439 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2383240f6b0f4dd71934cc582778d852af1e8e331277a954db8488726c2f6898"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.333337 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf" event={"ID":"27c7b843-97ec-45e7-b87a-cff6549aee8a","Type":"ContainerDied","Data":"c2f4c9383cd6a64d2dca5ba2ff5dcb8009b16fabb5dce38e059f25425b6f8d41"}
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.333397 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f4c9383cd6a64d2dca5ba2ff5dcb8009b16fabb5dce38e059f25425b6f8d41"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.351722 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vmpqh"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.374792 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90d7-account-create-update-zdz8k"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.386984 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.423278 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.424225 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6cwkq"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.455218 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmqbw\" (UniqueName: \"kubernetes.io/projected/60aa7052-469c-4202-83c1-780e52588e83-kube-api-access-nmqbw\") pod \"60aa7052-469c-4202-83c1-780e52588e83\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.455795 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-operator-scripts\") pod \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.456008 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd6fr\" (UniqueName: \"kubernetes.io/projected/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-kube-api-access-cd6fr\") pod \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\" (UID: \"ac3d52df-3f8a-4ba1-97eb-889f68e40cae\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.456114 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60aa7052-469c-4202-83c1-780e52588e83-operator-scripts\") pod \"60aa7052-469c-4202-83c1-780e52588e83\" (UID: \"60aa7052-469c-4202-83c1-780e52588e83\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.457161 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac3d52df-3f8a-4ba1-97eb-889f68e40cae" (UID: "ac3d52df-3f8a-4ba1-97eb-889f68e40cae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.459276 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60aa7052-469c-4202-83c1-780e52588e83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60aa7052-469c-4202-83c1-780e52588e83" (UID: "60aa7052-469c-4202-83c1-780e52588e83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.459734 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jwvqh"
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.492750 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60aa7052-469c-4202-83c1-780e52588e83-kube-api-access-nmqbw" (OuterVolumeSpecName: "kube-api-access-nmqbw") pod "60aa7052-469c-4202-83c1-780e52588e83" (UID: "60aa7052-469c-4202-83c1-780e52588e83"). InnerVolumeSpecName "kube-api-access-nmqbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.499874 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-kube-api-access-cd6fr" (OuterVolumeSpecName: "kube-api-access-cd6fr") pod "ac3d52df-3f8a-4ba1-97eb-889f68e40cae" (UID: "ac3d52df-3f8a-4ba1-97eb-889f68e40cae"). InnerVolumeSpecName "kube-api-access-cd6fr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558180 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-scripts\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558251 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0696ddd-2b30-4e81-954c-9219fa89b5f8-operator-scripts\") pod \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558311 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-config-data\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558375 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr7bl\" (UniqueName: \"kubernetes.io/projected/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-kube-api-access-sr7bl\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558444 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-sg-core-conf-yaml\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558508 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-run-httpd\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558868 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qt5c\" (UniqueName: \"kubernetes.io/projected/ec0ba804-303f-44b9-8ba0-68278fee0f17-kube-api-access-6qt5c\") pod \"ec0ba804-303f-44b9-8ba0-68278fee0f17\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558924 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv998\" (UniqueName: \"kubernetes.io/projected/27c7b843-97ec-45e7-b87a-cff6549aee8a-kube-api-access-hv998\") pod \"27c7b843-97ec-45e7-b87a-cff6549aee8a\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558952 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvvcq\" (UniqueName: \"kubernetes.io/projected/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-kube-api-access-wvvcq\") pod \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.558986 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec0ba804-303f-44b9-8ba0-68278fee0f17-operator-scripts\") pod \"ec0ba804-303f-44b9-8ba0-68278fee0f17\" (UID: \"ec0ba804-303f-44b9-8ba0-68278fee0f17\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559033 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-operator-scripts\") pod \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\" (UID: \"bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559079 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-log-httpd\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559111 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29tlf\" (UniqueName: \"kubernetes.io/projected/e0696ddd-2b30-4e81-954c-9219fa89b5f8-kube-api-access-29tlf\") pod \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\" (UID: \"e0696ddd-2b30-4e81-954c-9219fa89b5f8\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559161 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-combined-ca-bundle\") pod \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\" (UID: \"2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559197 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c7b843-97ec-45e7-b87a-cff6549aee8a-operator-scripts\") pod \"27c7b843-97ec-45e7-b87a-cff6549aee8a\" (UID: \"27c7b843-97ec-45e7-b87a-cff6549aee8a\") "
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559740 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559753 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd6fr\" (UniqueName: \"kubernetes.io/projected/ac3d52df-3f8a-4ba1-97eb-889f68e40cae-kube-api-access-cd6fr\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559765 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60aa7052-469c-4202-83c1-780e52588e83-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.559774 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmqbw\" (UniqueName: \"kubernetes.io/projected/60aa7052-469c-4202-83c1-780e52588e83-kube-api-access-nmqbw\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.560409 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c7b843-97ec-45e7-b87a-cff6549aee8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27c7b843-97ec-45e7-b87a-cff6549aee8a" (UID: "27c7b843-97ec-45e7-b87a-cff6549aee8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.564097 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.564551 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.564689 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0ba804-303f-44b9-8ba0-68278fee0f17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec0ba804-303f-44b9-8ba0-68278fee0f17" (UID: "ec0ba804-303f-44b9-8ba0-68278fee0f17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.564994 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0696ddd-2b30-4e81-954c-9219fa89b5f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0696ddd-2b30-4e81-954c-9219fa89b5f8" (UID: "e0696ddd-2b30-4e81-954c-9219fa89b5f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.565197 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" (UID: "bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.575585 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-kube-api-access-sr7bl" (OuterVolumeSpecName: "kube-api-access-sr7bl") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "kube-api-access-sr7bl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.580422 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0696ddd-2b30-4e81-954c-9219fa89b5f8-kube-api-access-29tlf" (OuterVolumeSpecName: "kube-api-access-29tlf") pod "e0696ddd-2b30-4e81-954c-9219fa89b5f8" (UID: "e0696ddd-2b30-4e81-954c-9219fa89b5f8"). InnerVolumeSpecName "kube-api-access-29tlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.582616 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-scripts" (OuterVolumeSpecName: "scripts") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.586764 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-kube-api-access-wvvcq" (OuterVolumeSpecName: "kube-api-access-wvvcq") pod "bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" (UID: "bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e"). InnerVolumeSpecName "kube-api-access-wvvcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.590642 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0ba804-303f-44b9-8ba0-68278fee0f17-kube-api-access-6qt5c" (OuterVolumeSpecName: "kube-api-access-6qt5c") pod "ec0ba804-303f-44b9-8ba0-68278fee0f17" (UID: "ec0ba804-303f-44b9-8ba0-68278fee0f17"). InnerVolumeSpecName "kube-api-access-6qt5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.623059 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c7b843-97ec-45e7-b87a-cff6549aee8a-kube-api-access-hv998" (OuterVolumeSpecName: "kube-api-access-hv998") pod "27c7b843-97ec-45e7-b87a-cff6549aee8a" (UID: "27c7b843-97ec-45e7-b87a-cff6549aee8a"). InnerVolumeSpecName "kube-api-access-hv998". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.663772 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qt5c\" (UniqueName: \"kubernetes.io/projected/ec0ba804-303f-44b9-8ba0-68278fee0f17-kube-api-access-6qt5c\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.669628 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv998\" (UniqueName: \"kubernetes.io/projected/27c7b843-97ec-45e7-b87a-cff6549aee8a-kube-api-access-hv998\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.669920 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvvcq\" (UniqueName: \"kubernetes.io/projected/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-kube-api-access-wvvcq\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670006 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec0ba804-303f-44b9-8ba0-68278fee0f17-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670098 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670174 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670256 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29tlf\" (UniqueName: \"kubernetes.io/projected/e0696ddd-2b30-4e81-954c-9219fa89b5f8-kube-api-access-29tlf\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670334 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c7b843-97ec-45e7-b87a-cff6549aee8a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670453 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670543 4679 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0696ddd-2b30-4e81-954c-9219fa89b5f8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670616 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr7bl\" (UniqueName: \"kubernetes.io/projected/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-kube-api-access-sr7bl\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.670688 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.699541 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.773135 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.777526 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.826204 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-config-data" (OuterVolumeSpecName: "config-data") pod "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" (UID: "2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.875522 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:19 crc kubenswrapper[4679]: I0203 12:26:19.875563 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.349188 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9015531f-675f-40b4-a643-94a33a87592b","Type":"ContainerStarted","Data":"5e595d1f60255aa29f87bb07e943466aeb4c77107f604d060b8b605cc2b6f001"}
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.349268 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.349289 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ec45-account-create-update-4kxqf"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.350199 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-90d7-account-create-update-zdz8k"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.350310 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vmpqh"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.350397 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6cwkq"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.350806 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jwvqh"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.386734 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.051302313 podStartE2EDuration="17.386702804s" podCreationTimestamp="2026-02-03 12:26:03 +0000 UTC" firstStartedPulling="2026-02-03 12:26:04.847701887 +0000 UTC m=+1237.322597975" lastFinishedPulling="2026-02-03 12:26:19.183102348 +0000 UTC m=+1251.657998466" observedRunningTime="2026-02-03 12:26:20.376638842 +0000 UTC m=+1252.851534930" watchObservedRunningTime="2026-02-03 12:26:20.386702804 +0000 UTC m=+1252.861598902"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.422482 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.449119 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.502085 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503533 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60aa7052-469c-4202-83c1-780e52588e83" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503559 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="60aa7052-469c-4202-83c1-780e52588e83" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503596 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-central-agent"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503605 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-central-agent"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503644 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c7b843-97ec-45e7-b87a-cff6549aee8a" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503653 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c7b843-97ec-45e7-b87a-cff6549aee8a" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503663 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="sg-core"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503670 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="sg-core"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503693 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0ba804-303f-44b9-8ba0-68278fee0f17" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503700 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0ba804-303f-44b9-8ba0-68278fee0f17" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503717 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-notification-agent"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503726 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-notification-agent"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503744 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3d52df-3f8a-4ba1-97eb-889f68e40cae" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503750 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3d52df-3f8a-4ba1-97eb-889f68e40cae" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503767 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="proxy-httpd"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503773 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="proxy-httpd"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503788 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0696ddd-2b30-4e81-954c-9219fa89b5f8" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503795 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0696ddd-2b30-4e81-954c-9219fa89b5f8" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: E0203 12:26:20.503816 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.503821 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504147 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="sg-core"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504166 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504181 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3d52df-3f8a-4ba1-97eb-889f68e40cae" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504211 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="60aa7052-469c-4202-83c1-780e52588e83" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504225 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="proxy-httpd"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504237 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-notification-agent"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504247 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c7b843-97ec-45e7-b87a-cff6549aee8a" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504258 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0696ddd-2b30-4e81-954c-9219fa89b5f8" containerName="mariadb-account-create-update"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504278 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0ba804-303f-44b9-8ba0-68278fee0f17" containerName="mariadb-database-create"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.504298 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="ceilometer-central-agent"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.510074 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.518340 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.519826 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.536345 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601021 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601093 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601144 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-scripts\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601167 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-run-httpd\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601188 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-log-httpd\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601227 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7hn\" (UniqueName: \"kubernetes.io/projected/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-kube-api-access-xx7hn\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.601272 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-config-data\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704210 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704294 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704400 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-scripts\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704432 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-run-httpd\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704461 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-log-httpd\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704513 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx7hn\" (UniqueName: \"kubernetes.io/projected/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-kube-api-access-xx7hn\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.704580 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-config-data\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.706111 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-log-httpd\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.706464 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-run-httpd\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.710337 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-scripts\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.710678 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.711117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.727898 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-config-data\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.730754 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx7hn\" (UniqueName: \"kubernetes.io/projected/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-kube-api-access-xx7hn\") pod \"ceilometer-0\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " pod="openstack/ceilometer-0"
Feb 03 12:26:20 crc kubenswrapper[4679]: I0203 12:26:20.850599 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.261132 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bx7p9"]
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.263405 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.267859 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.269261 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.269582 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-drjl6"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.280232 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bx7p9"]
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.386904 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.426451 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-scripts\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.426511 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzh4k\" (UniqueName: \"kubernetes.io/projected/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-kube-api-access-lzh4k\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.426569 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-config-data\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.426691 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.528826 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.529002 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-scripts\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.529027 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzh4k\" (UniqueName: \"kubernetes.io/projected/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-kube-api-access-lzh4k\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.529049 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-config-data\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.536925 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.544858 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-scripts\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.545870 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-config-data\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.545919 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzh4k\" (UniqueName: \"kubernetes.io/projected/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-kube-api-access-lzh4k\") pod \"nova-cell0-conductor-db-sync-bx7p9\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:21 crc kubenswrapper[4679]: I0203 12:26:21.593600 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bx7p9"
Feb 03 12:26:22 crc kubenswrapper[4679]: I0203 12:26:22.070913 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bx7p9"]
Feb 03 12:26:22 crc kubenswrapper[4679]: I0203 12:26:22.224436 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" path="/var/lib/kubelet/pods/2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7/volumes"
Feb 03 12:26:22 crc kubenswrapper[4679]: I0203 12:26:22.373081 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerStarted","Data":"43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b"}
Feb 03 12:26:22 crc kubenswrapper[4679]: I0203 12:26:22.373193 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerStarted","Data":"2eafe35e04719f543d5f44365632b746234a280c5d75ddc893187c411989ed3e"}
Feb 03 12:26:22 crc kubenswrapper[4679]: I0203 12:26:22.375434 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" event={"ID":"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5","Type":"ContainerStarted","Data":"12d5c7aa3b5e9176b3f2620d7d51f14c2b44ff6488f428a48642b953732fb723"}
Feb 03 12:26:23 crc kubenswrapper[4679]: I0203 12:26:23.400187 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerStarted","Data":"e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6"}
Feb 03 12:26:24 crc kubenswrapper[4679]: I0203 12:26:24.417395 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755ddc4dc6-5tjzs" event={"ID":"d2c53de0-396a-4234-969c-65e4c2227710","Type":"ContainerDied","Data":"3d45b94948bb9d6ae9e9251381d37c498a102b8b8646ee693a3ee3f1edcbb7f5"}
Feb 03 12:26:24 crc kubenswrapper[4679]: I0203 12:26:24.417421 4679 generic.go:334] "Generic (PLEG): container finished" podID="d2c53de0-396a-4234-969c-65e4c2227710" containerID="3d45b94948bb9d6ae9e9251381d37c498a102b8b8646ee693a3ee3f1edcbb7f5" exitCode=137
Feb 03 12:26:24 crc kubenswrapper[4679]: I0203 12:26:24.991572 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-755ddc4dc6-5tjzs"
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.114974 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-config-data\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.115132 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7cft\" (UniqueName: \"kubernetes.io/projected/d2c53de0-396a-4234-969c-65e4c2227710-kube-api-access-l7cft\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.115212 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-tls-certs\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.115419 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-combined-ca-bundle\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.115998 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-secret-key\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.116100 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c53de0-396a-4234-969c-65e4c2227710-logs\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.116733 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-scripts\") pod \"d2c53de0-396a-4234-969c-65e4c2227710\" (UID: \"d2c53de0-396a-4234-969c-65e4c2227710\") "
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.119808 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c53de0-396a-4234-969c-65e4c2227710-logs" (OuterVolumeSpecName: "logs") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.125534 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.125735 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c53de0-396a-4234-969c-65e4c2227710-kube-api-access-l7cft" (OuterVolumeSpecName: "kube-api-access-l7cft") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "kube-api-access-l7cft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.150951 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.156003 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-config-data" (OuterVolumeSpecName: "config-data") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.159560 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-scripts" (OuterVolumeSpecName: "scripts") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.183512 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d2c53de0-396a-4234-969c-65e4c2227710" (UID: "d2c53de0-396a-4234-969c-65e4c2227710"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219647 4679 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219686 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219698 4679 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c53de0-396a-4234-969c-65e4c2227710-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219708 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c53de0-396a-4234-969c-65e4c2227710-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219720 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219729 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c53de0-396a-4234-969c-65e4c2227710-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.219737 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7cft\" (UniqueName: \"kubernetes.io/projected/d2c53de0-396a-4234-969c-65e4c2227710-kube-api-access-l7cft\") on node \"crc\" DevicePath \"\""
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.432308 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-755ddc4dc6-5tjzs" event={"ID":"d2c53de0-396a-4234-969c-65e4c2227710","Type":"ContainerDied","Data":"79a264c94f2f251f1abc48d4ff5066694877bd39be0a421377256bc938b793f5"}
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.432384 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-755ddc4dc6-5tjzs"
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.432448 4679 scope.go:117] "RemoveContainer" containerID="8b50418522d858f152f94f7287764ae5b114c63d74924de8cc696f45b2e543fc"
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.457321 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerStarted","Data":"7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27"}
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.520126 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-755ddc4dc6-5tjzs"]
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.534203 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-755ddc4dc6-5tjzs"]
Feb 03 12:26:25 crc kubenswrapper[4679]: I0203 12:26:25.704985 4679 scope.go:117] "RemoveContainer" containerID="3d45b94948bb9d6ae9e9251381d37c498a102b8b8646ee693a3ee3f1edcbb7f5"
Feb 03 12:26:26 crc kubenswrapper[4679]: I0203 12:26:26.233068 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c53de0-396a-4234-969c-65e4c2227710" path="/var/lib/kubelet/pods/d2c53de0-396a-4234-969c-65e4c2227710/volumes"
Feb 03 12:26:26 crc kubenswrapper[4679]: I0203 12:26:26.513906 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.581046 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerStarted","Data":"18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c"}
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.582841 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.581312 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="proxy-httpd" containerID="cri-o://18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c" gracePeriod=30
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.583017 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" event={"ID":"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5","Type":"ContainerStarted","Data":"ca3a6340e71b1870165d2f67e1c402614fcd2a2305a8a088f5e06da07095f91f"}
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.581545 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-notification-agent" containerID="cri-o://e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6" gracePeriod=30
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.581538 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="sg-core" containerID="cri-o://7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27" gracePeriod=30
Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.581241 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-central-agent"
containerID="cri-o://43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b" gracePeriod=30 Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.620130 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.338734763 podStartE2EDuration="12.620098238s" podCreationTimestamp="2026-02-03 12:26:20 +0000 UTC" firstStartedPulling="2026-02-03 12:26:21.390154748 +0000 UTC m=+1253.865050846" lastFinishedPulling="2026-02-03 12:26:31.671518233 +0000 UTC m=+1264.146414321" observedRunningTime="2026-02-03 12:26:32.607803078 +0000 UTC m=+1265.082699166" watchObservedRunningTime="2026-02-03 12:26:32.620098238 +0000 UTC m=+1265.094994326" Feb 03 12:26:32 crc kubenswrapper[4679]: I0203 12:26:32.634940 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" podStartSLOduration=2.027438913 podStartE2EDuration="11.634916414s" podCreationTimestamp="2026-02-03 12:26:21 +0000 UTC" firstStartedPulling="2026-02-03 12:26:22.081087806 +0000 UTC m=+1254.555983904" lastFinishedPulling="2026-02-03 12:26:31.688565317 +0000 UTC m=+1264.163461405" observedRunningTime="2026-02-03 12:26:32.629986766 +0000 UTC m=+1265.104882864" watchObservedRunningTime="2026-02-03 12:26:32.634916414 +0000 UTC m=+1265.109812512" Feb 03 12:26:33 crc kubenswrapper[4679]: I0203 12:26:33.595827 4679 generic.go:334] "Generic (PLEG): container finished" podID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerID="18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c" exitCode=0 Feb 03 12:26:33 crc kubenswrapper[4679]: I0203 12:26:33.596228 4679 generic.go:334] "Generic (PLEG): container finished" podID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerID="7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27" exitCode=2 Feb 03 12:26:33 crc kubenswrapper[4679]: I0203 12:26:33.596241 4679 generic.go:334] "Generic (PLEG): container finished" podID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerID="43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b" exitCode=0 Feb 03 12:26:33 crc kubenswrapper[4679]: I0203 12:26:33.596633 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerDied","Data":"18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c"} Feb 03 12:26:33 crc kubenswrapper[4679]: I0203 12:26:33.596738 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerDied","Data":"7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27"} Feb 03 12:26:33 crc kubenswrapper[4679]: I0203 12:26:33.596753 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerDied","Data":"43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b"} Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.189685 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.228740 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-scripts\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.228815 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx7hn\" (UniqueName: \"kubernetes.io/projected/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-kube-api-access-xx7hn\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.228967 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-combined-ca-bundle\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.229146 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-sg-core-conf-yaml\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.229205 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-config-data\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.229244 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-log-httpd\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.229348 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-run-httpd\") pod \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\" (UID: \"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6\") " Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.231638 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.232097 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.253108 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-scripts" (OuterVolumeSpecName: "scripts") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.258103 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.266636 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-kube-api-access-xx7hn" (OuterVolumeSpecName: "kube-api-access-xx7hn") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "kube-api-access-xx7hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.327573 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.331979 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.332257 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.332316 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx7hn\" (UniqueName: \"kubernetes.io/projected/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-kube-api-access-xx7hn\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.332394 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.332507 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.332632 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.351584 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-config-data" (OuterVolumeSpecName: "config-data") pod "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" (UID: "dbc35de8-4583-41c7-b0e7-5a4c75a48ad6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.434712 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.624490 4679 generic.go:334] "Generic (PLEG): container finished" podID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerID="e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6" exitCode=0 Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.624554 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerDied","Data":"e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6"} Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.624594 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dbc35de8-4583-41c7-b0e7-5a4c75a48ad6","Type":"ContainerDied","Data":"2eafe35e04719f543d5f44365632b746234a280c5d75ddc893187c411989ed3e"} Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.624616 4679 scope.go:117] "RemoveContainer" containerID="18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.624834 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.626688 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.627053 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-log" containerID="cri-o://467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480" gracePeriod=30 Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.627561 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-httpd" containerID="cri-o://d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec" gracePeriod=30 Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.665653 4679 scope.go:117] "RemoveContainer" containerID="7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.680501 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.697436 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.702062 4679 scope.go:117] "RemoveContainer" containerID="e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.718751 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.719663 4679 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="sg-core" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.720927 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="sg-core" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.721083 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="proxy-httpd" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.721201 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="proxy-httpd" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.727288 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.732515 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.732842 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-notification-agent" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.732998 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-notification-agent" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.733117 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-central-agent" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.733190 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-central-agent" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.733284 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon-log" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.733394 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon-log" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.733948 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="proxy-httpd" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.734059 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.734145 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="sg-core" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.734224 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-notification-agent" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.734301 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c53de0-396a-4234-969c-65e4c2227710" containerName="horizon-log" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.734392 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" containerName="ceilometer-central-agent" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.740215 4679 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.740701 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.745777 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.747839 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.785099 4679 scope.go:117] "RemoveContainer" containerID="43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.810385 4679 scope.go:117] "RemoveContainer" containerID="18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.811047 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c\": container with ID starting with 18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c not found: ID does not exist" containerID="18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.811134 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c"} err="failed to get container status \"18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c\": rpc error: code = NotFound desc = could not find container \"18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c\": container with ID starting with 18d326870e2c64d949d32f64deaa99d2cb0b5068107cc77fb96709e0f189ca7c not found: ID does not exist" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.811201 4679 scope.go:117] "RemoveContainer" containerID="7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.811579 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27\": container with ID starting with 7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27 not found: ID does not exist" containerID="7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.811611 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27"} err="failed to get container status \"7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27\": rpc error: code = NotFound desc = could not find container \"7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27\": container with ID starting with 7b7b35135849df39bfec85c8a98bf6f182b2e75eaa5920fe801117a4dd297e27 not found: ID does not exist" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.811646 4679 scope.go:117] "RemoveContainer" containerID="e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.812252 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6\": container with ID starting with e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6 not found: ID does not exist" containerID="e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.812343 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6"} err="failed to get container status \"e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6\": rpc error: code = NotFound desc = could not find container \"e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6\": container with ID starting with e46c1ea87265d4ab190355ae42711ce9cb60060c4327ac36dbb02dee33365cc6 not found: ID does not exist" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.812457 4679 scope.go:117] "RemoveContainer" containerID="43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b" Feb 03 12:26:34 crc kubenswrapper[4679]: E0203 12:26:34.812745 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b\": container with ID starting with 43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b not found: ID does not exist" containerID="43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.812764 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b"} err="failed to get container status \"43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b\": rpc error: code = NotFound desc = could not find container \"43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b\": container with ID starting with 43e3b6f81b9cdbdd505c34cd7206598ef26942441a29ab1e1e9bc96c50c7a93b not found: ID does not exist" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.868777 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-config-data\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.868956 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-scripts\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.869063 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-run-httpd\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.869149 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.869279 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.869385 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctgq\" (UniqueName: \"kubernetes.io/projected/4dcc9f1d-087a-4265-8398-0f03212f2afa-kube-api-access-7ctgq\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.869473 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-log-httpd\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.972008 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctgq\" (UniqueName: \"kubernetes.io/projected/4dcc9f1d-087a-4265-8398-0f03212f2afa-kube-api-access-7ctgq\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.972826 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-log-httpd\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973113 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-config-data\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973268 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-scripts\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973349 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-log-httpd\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973516 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-run-httpd\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973609 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973774 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.973787 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-run-httpd\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.978041 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.978140 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-scripts\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.982548 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-config-data\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.986797 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:34 crc kubenswrapper[4679]: I0203 12:26:34.991790 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctgq\" (UniqueName: \"kubernetes.io/projected/4dcc9f1d-087a-4265-8398-0f03212f2afa-kube-api-access-7ctgq\") pod \"ceilometer-0\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " pod="openstack/ceilometer-0" Feb 03 12:26:35 crc kubenswrapper[4679]: I0203 12:26:35.076343 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:35 crc kubenswrapper[4679]: I0203 12:26:35.580151 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:35 crc kubenswrapper[4679]: W0203 12:26:35.586283 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dcc9f1d_087a_4265_8398_0f03212f2afa.slice/crio-ce4c80931851223d35701ae4901c7d6ae02a38cc3d120d7cd018c567b22f326f WatchSource:0}: Error finding container ce4c80931851223d35701ae4901c7d6ae02a38cc3d120d7cd018c567b22f326f: Status 404 returned error can't find the container with id ce4c80931851223d35701ae4901c7d6ae02a38cc3d120d7cd018c567b22f326f Feb 03 12:26:35 crc kubenswrapper[4679]: I0203 12:26:35.639459 4679 generic.go:334] "Generic (PLEG): container finished" podID="91430e11-9487-4d11-96af-226beb9e2c1c" containerID="467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480" exitCode=143 Feb 03 12:26:35 crc kubenswrapper[4679]: I0203 12:26:35.639557 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91430e11-9487-4d11-96af-226beb9e2c1c","Type":"ContainerDied","Data":"467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480"} Feb 03 12:26:35 crc kubenswrapper[4679]: I0203 12:26:35.642334 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerStarted","Data":"ce4c80931851223d35701ae4901c7d6ae02a38cc3d120d7cd018c567b22f326f"} Feb 03 12:26:36 crc kubenswrapper[4679]: I0203 12:26:36.226624 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc35de8-4583-41c7-b0e7-5a4c75a48ad6" path="/var/lib/kubelet/pods/dbc35de8-4583-41c7-b0e7-5a4c75a48ad6/volumes" Feb 03 12:26:37 crc kubenswrapper[4679]: I0203 12:26:37.672077 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerStarted","Data":"4f5d31e34dd8f0c320fbfcbc0377c108f9e3777e5c25e2e3939f8670843079ff"} Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.347160 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465583 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-scripts\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465744 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-public-tls-certs\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465808 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-combined-ca-bundle\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465854 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-config-data\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465878 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-logs\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465911 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcsrr\" (UniqueName: \"kubernetes.io/projected/91430e11-9487-4d11-96af-226beb9e2c1c-kube-api-access-jcsrr\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465971 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.465993 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-httpd-run\") pod \"91430e11-9487-4d11-96af-226beb9e2c1c\" (UID: \"91430e11-9487-4d11-96af-226beb9e2c1c\") " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.467478 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.468396 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-logs" (OuterVolumeSpecName: "logs") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.480227 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-scripts" (OuterVolumeSpecName: "scripts") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.493864 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91430e11-9487-4d11-96af-226beb9e2c1c-kube-api-access-jcsrr" (OuterVolumeSpecName: "kube-api-access-jcsrr") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "kube-api-access-jcsrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.497205 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.512765 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.549227 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.568786 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.569291 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcsrr\" (UniqueName: \"kubernetes.io/projected/91430e11-9487-4d11-96af-226beb9e2c1c-kube-api-access-jcsrr\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.569444 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.573488 4679 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91430e11-9487-4d11-96af-226beb9e2c1c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.573539 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.573550 4679 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.573560 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.569829 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-config-data" (OuterVolumeSpecName: "config-data") pod "91430e11-9487-4d11-96af-226beb9e2c1c" (UID: "91430e11-9487-4d11-96af-226beb9e2c1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.595922 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.675835 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91430e11-9487-4d11-96af-226beb9e2c1c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.676208 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.687140 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerStarted","Data":"91047b9ad100b5cd4d79adbe696be7d8fbcf541c46cac714227b1d3dbd089128"} Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.689714 4679 generic.go:334] "Generic (PLEG): container finished" podID="91430e11-9487-4d11-96af-226beb9e2c1c" containerID="d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec" exitCode=0 Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.689772 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.689785 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91430e11-9487-4d11-96af-226beb9e2c1c","Type":"ContainerDied","Data":"d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec"} Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.690948 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91430e11-9487-4d11-96af-226beb9e2c1c","Type":"ContainerDied","Data":"ad0ac41f82b83a248e3c4f4ccd4ba9a6d6ea1e2f8a23ecf003c5d9f279c54f2e"} Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.690972 4679 scope.go:117] "RemoveContainer" containerID="d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.737741 4679 scope.go:117] "RemoveContainer" containerID="467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.760981 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.775630 4679 scope.go:117] "RemoveContainer" containerID="d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec" Feb 03 12:26:38 crc kubenswrapper[4679]: E0203 12:26:38.777867 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec\": container with ID starting with d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec not found: ID does not exist" containerID="d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.777960 4679 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec"} err="failed to get container status \"d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec\": rpc error: code = NotFound desc = could not find container \"d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec\": container with ID starting with d9a056cf7fd5160147ae87ddb3c9f5835cfea9d75d45434eee2705b39c31cfec not found: ID does not exist" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.777992 4679 scope.go:117] "RemoveContainer" containerID="467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480" Feb 03 12:26:38 crc kubenswrapper[4679]: E0203 12:26:38.778564 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480\": container with ID starting with 467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480 not found: ID does not exist" containerID="467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.778583 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480"} err="failed to get container status \"467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480\": rpc error: code = NotFound desc = could not find container \"467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480\": container with ID starting with 467d021b2556b5b6051af73b19f66d1cd27b6ad3dea0647f3df4adefd6d1b480 not found: ID does not exist" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.781654 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.810550 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:26:38 crc kubenswrapper[4679]: E0203 12:26:38.814681 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-httpd" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.814713 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-httpd" Feb 03 12:26:38 crc kubenswrapper[4679]: E0203 12:26:38.814757 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-log" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.814765 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-log" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.822569 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-log" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.822647 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" containerName="glance-httpd" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.835322 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.838904 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.844265 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.861972 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.990901 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991000 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991026 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991061 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991084 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-config-data\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991107 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-scripts\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991158 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-logs\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:38 crc kubenswrapper[4679]: I0203 12:26:38.991194 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4tmf7\" (UniqueName: \"kubernetes.io/projected/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-kube-api-access-4tmf7\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.098940 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099028 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099059 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099092 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099118 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-config-data\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099140 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-scripts\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099193 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-logs\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.099223 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tmf7\" (UniqueName: \"kubernetes.io/projected/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-kube-api-access-4tmf7\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.100719 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.101557 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-logs\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.101819 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.115965 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-scripts\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.116708 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.117369 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.120606 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-config-data\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.137113 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tmf7\" (UniqueName: \"kubernetes.io/projected/b1d9c6da-29c6-43e7-92a6-ee0c5901c36b-kube-api-access-4tmf7\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.156325 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b\") " pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.196394 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.707589 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerStarted","Data":"101dfae6cb9467e10d28767e0d9e4385353783bd701be0390bacde9ec81ebb97"} Feb 03 12:26:39 crc kubenswrapper[4679]: I0203 12:26:39.864562 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.228729 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91430e11-9487-4d11-96af-226beb9e2c1c" path="/var/lib/kubelet/pods/91430e11-9487-4d11-96af-226beb9e2c1c/volumes" Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.382594 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.382948 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-log" containerID="cri-o://951674e17f8fccdfaa4ab910c7c7efe2a6193020647d195b5e38143d23aab9ad" gracePeriod=30 Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.383058 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-httpd" containerID="cri-o://ebfc05804a632e39ca7bc642c84d42d0a82311fb2854c542dce70379f61c5ba3" gracePeriod=30 Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.722961 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b","Type":"ContainerStarted","Data":"58631dd2e11314fc90953630870ae6d8b416d3a97e3301880f7cd25620cf03a6"} Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.723379 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b","Type":"ContainerStarted","Data":"fc62c9b039083b237b6741abb4971af5e37c08d4dc0d6da6492f3a0430573e83"} Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.725143 4679 generic.go:334] "Generic (PLEG): container finished" podID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerID="951674e17f8fccdfaa4ab910c7c7efe2a6193020647d195b5e38143d23aab9ad" exitCode=143 Feb 03 12:26:40 crc kubenswrapper[4679]: I0203 12:26:40.725178 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91d1ab2d-b565-4322-b193-3143ec9b5919","Type":"ContainerDied","Data":"951674e17f8fccdfaa4ab910c7c7efe2a6193020647d195b5e38143d23aab9ad"} Feb 03 12:26:41 crc kubenswrapper[4679]: I0203 12:26:41.738441 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerStarted","Data":"0cd9e5792981848bc987eedf8076d82106736ae1903f04409bff41603362e270"} Feb 03 12:26:41 crc kubenswrapper[4679]: I0203 12:26:41.738595 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:26:41 crc kubenswrapper[4679]: I0203 12:26:41.740410 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b1d9c6da-29c6-43e7-92a6-ee0c5901c36b","Type":"ContainerStarted","Data":"fa5f0108e1e2222dbb7eb04348f660f63b34811e7944fe6a04ffa68a93afc4fe"} Feb 03 12:26:41 crc kubenswrapper[4679]: I0203 12:26:41.767603 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.390772349 podStartE2EDuration="7.767580203s" podCreationTimestamp="2026-02-03 12:26:34 +0000 UTC" firstStartedPulling="2026-02-03 12:26:35.59074796 +0000 UTC m=+1268.065644048" lastFinishedPulling="2026-02-03 12:26:40.967555814 +0000 UTC m=+1273.442451902" observedRunningTime="2026-02-03 12:26:41.763048925 +0000 UTC m=+1274.237945013" watchObservedRunningTime="2026-02-03 12:26:41.767580203 +0000 UTC m=+1274.242476291" Feb 03 12:26:41 crc kubenswrapper[4679]: I0203 12:26:41.791593 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.791560947 podStartE2EDuration="3.791560947s" podCreationTimestamp="2026-02-03 12:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:41.788078156 +0000 UTC m=+1274.262974254" watchObservedRunningTime="2026-02-03 12:26:41.791560947 +0000 UTC m=+1274.266457035" Feb 03 12:26:42 crc kubenswrapper[4679]: I0203 12:26:42.970900 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:43 crc kubenswrapper[4679]: I0203 12:26:43.769835 4679 generic.go:334] "Generic (PLEG): container finished" podID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerID="ebfc05804a632e39ca7bc642c84d42d0a82311fb2854c542dce70379f61c5ba3" exitCode=0 Feb 03 12:26:43 crc kubenswrapper[4679]: I0203 12:26:43.770434 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91d1ab2d-b565-4322-b193-3143ec9b5919","Type":"ContainerDied","Data":"ebfc05804a632e39ca7bc642c84d42d0a82311fb2854c542dce70379f61c5ba3"} Feb 03 12:26:43 crc kubenswrapper[4679]: I0203 12:26:43.770590 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-central-agent" containerID="cri-o://4f5d31e34dd8f0c320fbfcbc0377c108f9e3777e5c25e2e3939f8670843079ff" gracePeriod=30 Feb 03 12:26:43 crc kubenswrapper[4679]: I0203 12:26:43.770678 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="proxy-httpd" containerID="cri-o://0cd9e5792981848bc987eedf8076d82106736ae1903f04409bff41603362e270" gracePeriod=30 Feb 03 12:26:43 crc kubenswrapper[4679]: I0203 12:26:43.770739 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="sg-core" containerID="cri-o://101dfae6cb9467e10d28767e0d9e4385353783bd701be0390bacde9ec81ebb97" gracePeriod=30 Feb 03 12:26:43 crc kubenswrapper[4679]: I0203 12:26:43.770756 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-notification-agent" containerID="cri-o://91047b9ad100b5cd4d79adbe696be7d8fbcf541c46cac714227b1d3dbd089128" gracePeriod=30 Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.091987 4679 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.098333 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2f42cdec-52dd-4576-b6f4-aa2eb1dfb1d7" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.209319 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-scripts\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.209792 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.209881 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58p76\" (UniqueName: \"kubernetes.io/projected/91d1ab2d-b565-4322-b193-3143ec9b5919-kube-api-access-58p76\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.209936 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-logs\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210000 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-config-data\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210048 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-combined-ca-bundle\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210076 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-httpd-run\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210150 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-internal-tls-certs\") pod \"91d1ab2d-b565-4322-b193-3143ec9b5919\" (UID: \"91d1ab2d-b565-4322-b193-3143ec9b5919\") " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210444 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-logs" (OuterVolumeSpecName: "logs") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210763 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.210881 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.218023 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.222324 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-scripts" (OuterVolumeSpecName: "scripts") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.222404 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d1ab2d-b565-4322-b193-3143ec9b5919-kube-api-access-58p76" (OuterVolumeSpecName: "kube-api-access-58p76") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "kube-api-access-58p76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.247181 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.269936 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.279454 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-config-data" (OuterVolumeSpecName: "config-data") pod "91d1ab2d-b565-4322-b193-3143ec9b5919" (UID: "91d1ab2d-b565-4322-b193-3143ec9b5919"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313270 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313317 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58p76\" (UniqueName: \"kubernetes.io/projected/91d1ab2d-b565-4322-b193-3143ec9b5919-kube-api-access-58p76\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313329 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313339 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313350 4679 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91d1ab2d-b565-4322-b193-3143ec9b5919-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313376 4679 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.313388 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d1ab2d-b565-4322-b193-3143ec9b5919-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.334717 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.416909 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.782905 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"91d1ab2d-b565-4322-b193-3143ec9b5919","Type":"ContainerDied","Data":"ed473b7574fc7aa2e64f78c7d3578c3498695e60742a060a3a8a8c6abb5f134d"} Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.782952 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.782985 4679 scope.go:117] "RemoveContainer" containerID="ebfc05804a632e39ca7bc642c84d42d0a82311fb2854c542dce70379f61c5ba3" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.785548 4679 generic.go:334] "Generic (PLEG): container finished" podID="217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" containerID="ca3a6340e71b1870165d2f67e1c402614fcd2a2305a8a088f5e06da07095f91f" exitCode=0 Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.785603 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" event={"ID":"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5","Type":"ContainerDied","Data":"ca3a6340e71b1870165d2f67e1c402614fcd2a2305a8a088f5e06da07095f91f"} Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.789659 4679 generic.go:334] "Generic (PLEG): container finished" podID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerID="0cd9e5792981848bc987eedf8076d82106736ae1903f04409bff41603362e270" exitCode=0 Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.789692 4679 generic.go:334] "Generic (PLEG): container finished" podID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerID="101dfae6cb9467e10d28767e0d9e4385353783bd701be0390bacde9ec81ebb97" exitCode=2 Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.789701 4679 generic.go:334] "Generic (PLEG): container finished" podID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerID="91047b9ad100b5cd4d79adbe696be7d8fbcf541c46cac714227b1d3dbd089128" exitCode=0 Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.789723 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerDied","Data":"0cd9e5792981848bc987eedf8076d82106736ae1903f04409bff41603362e270"} Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.789749 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerDied","Data":"101dfae6cb9467e10d28767e0d9e4385353783bd701be0390bacde9ec81ebb97"} Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.789760 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerDied","Data":"91047b9ad100b5cd4d79adbe696be7d8fbcf541c46cac714227b1d3dbd089128"} Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.821612 4679 scope.go:117] "RemoveContainer" containerID="951674e17f8fccdfaa4ab910c7c7efe2a6193020647d195b5e38143d23aab9ad" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.825686 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.844780 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.860031 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:26:44 crc kubenswrapper[4679]: E0203 12:26:44.861186 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-httpd" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.861212 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-httpd" Feb 03 
12:26:44 crc kubenswrapper[4679]: E0203 12:26:44.861241 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-log" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.861250 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-log" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.862811 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-httpd" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.862854 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" containerName="glance-log" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.865200 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.868324 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.868679 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.875623 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926306 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mm9x\" (UniqueName: \"kubernetes.io/projected/1672261a-caab-4c72-9be3-78b40978e2cf-kube-api-access-5mm9x\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926413 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1672261a-caab-4c72-9be3-78b40978e2cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926446 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926473 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926521 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1672261a-caab-4c72-9be3-78b40978e2cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc 
kubenswrapper[4679]: I0203 12:26:44.926544 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926582 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:44 crc kubenswrapper[4679]: I0203 12:26:44.926616 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.029955 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030065 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mm9x\" (UniqueName: \"kubernetes.io/projected/1672261a-caab-4c72-9be3-78b40978e2cf-kube-api-access-5mm9x\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030105 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1672261a-caab-4c72-9be3-78b40978e2cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030127 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030137 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030158 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 
12:26:45.030208 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1672261a-caab-4c72-9be3-78b40978e2cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030239 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030283 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030924 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1672261a-caab-4c72-9be3-78b40978e2cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.030973 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1672261a-caab-4c72-9be3-78b40978e2cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.040846 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.042746 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.043691 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.052486 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mm9x\" (UniqueName: \"kubernetes.io/projected/1672261a-caab-4c72-9be3-78b40978e2cf-kube-api-access-5mm9x\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.052534 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1672261a-caab-4c72-9be3-78b40978e2cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.063175 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"1672261a-caab-4c72-9be3-78b40978e2cf\") " pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.193894 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.765975 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 12:26:45 crc kubenswrapper[4679]: I0203 12:26:45.817803 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1672261a-caab-4c72-9be3-78b40978e2cf","Type":"ContainerStarted","Data":"5b8f32cba49f42d4d911c1719982b3845e66835c110bd5f0c359936cbf274bcf"} Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.214200 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.237753 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d1ab2d-b565-4322-b193-3143ec9b5919" path="/var/lib/kubelet/pods/91d1ab2d-b565-4322-b193-3143ec9b5919/volumes" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.282237 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-combined-ca-bundle\") pod \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.282308 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzh4k\" (UniqueName: \"kubernetes.io/projected/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-kube-api-access-lzh4k\") pod \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.282341 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-config-data\") pod \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.282454 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-scripts\") pod \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\" (UID: \"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5\") " Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.292088 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-scripts" (OuterVolumeSpecName: "scripts") pod "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" (UID: "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.292666 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-kube-api-access-lzh4k" (OuterVolumeSpecName: "kube-api-access-lzh4k") pod "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" (UID: "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5"). InnerVolumeSpecName "kube-api-access-lzh4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.317872 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" (UID: "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.322880 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-config-data" (OuterVolumeSpecName: "config-data") pod "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" (UID: "217fbaf0-f384-44f2-a7ec-07fbc5eb38a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.386680 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.386722 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzh4k\" (UniqueName: \"kubernetes.io/projected/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-kube-api-access-lzh4k\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.386735 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.386743 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.858790 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" event={"ID":"217fbaf0-f384-44f2-a7ec-07fbc5eb38a5","Type":"ContainerDied","Data":"12d5c7aa3b5e9176b3f2620d7d51f14c2b44ff6488f428a48642b953732fb723"} Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.859224 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12d5c7aa3b5e9176b3f2620d7d51f14c2b44ff6488f428a48642b953732fb723" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.858837 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bx7p9" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.870146 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1672261a-caab-4c72-9be3-78b40978e2cf","Type":"ContainerStarted","Data":"5b97cb48a738786491235bcec9fd79a68fbe81788f74edf8dbe83309f06b1bdc"} Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.934118 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:26:46 crc kubenswrapper[4679]: E0203 12:26:46.948985 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" containerName="nova-cell0-conductor-db-sync" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.949130 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" containerName="nova-cell0-conductor-db-sync" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.949808 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" containerName="nova-cell0-conductor-db-sync" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.951819 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.955214 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-drjl6" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.955839 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 12:26:46 crc kubenswrapper[4679]: I0203 12:26:46.963321 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.103465 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.103580 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqtfb\" (UniqueName: \"kubernetes.io/projected/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-kube-api-access-xqtfb\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.103983 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.205800 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.205884 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xqtfb\" (UniqueName: \"kubernetes.io/projected/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-kube-api-access-xqtfb\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.206002 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.213516 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.213519 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.247164 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqtfb\" (UniqueName: \"kubernetes.io/projected/e72b3e9b-a5ec-43f1-a286-43f2ce2f5240-kube-api-access-xqtfb\") pod \"nova-cell0-conductor-0\" (UID: \"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240\") " pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.284866 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:47 crc kubenswrapper[4679]: W0203 12:26:47.823064 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode72b3e9b_a5ec_43f1_a286_43f2ce2f5240.slice/crio-7151ec10b55776cfcfc88460e7b7cab6dce09e3e8d873602b6bd0da9f7db5e33 WatchSource:0}: Error finding container 7151ec10b55776cfcfc88460e7b7cab6dce09e3e8d873602b6bd0da9f7db5e33: Status 404 returned error can't find the container with id 7151ec10b55776cfcfc88460e7b7cab6dce09e3e8d873602b6bd0da9f7db5e33 Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.835425 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.885432 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240","Type":"ContainerStarted","Data":"7151ec10b55776cfcfc88460e7b7cab6dce09e3e8d873602b6bd0da9f7db5e33"} Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.889512 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1672261a-caab-4c72-9be3-78b40978e2cf","Type":"ContainerStarted","Data":"b5cacbc09cd248d755602017c88359342e76730e147e744811b0b6daeb27f8b3"} Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.896576 4679 generic.go:334] "Generic (PLEG): container finished" podID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerID="4f5d31e34dd8f0c320fbfcbc0377c108f9e3777e5c25e2e3939f8670843079ff" exitCode=0 Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.896650 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerDied","Data":"4f5d31e34dd8f0c320fbfcbc0377c108f9e3777e5c25e2e3939f8670843079ff"} Feb 03 12:26:47 crc kubenswrapper[4679]: I0203 12:26:47.928223 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.928196704 podStartE2EDuration="3.928196704s" podCreationTimestamp="2026-02-03 12:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:47.912332691 +0000 UTC m=+1280.387228799" watchObservedRunningTime="2026-02-03 12:26:47.928196704 +0000 UTC m=+1280.403092792" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.024914 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122413 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-sg-core-conf-yaml\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122582 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctgq\" (UniqueName: \"kubernetes.io/projected/4dcc9f1d-087a-4265-8398-0f03212f2afa-kube-api-access-7ctgq\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122689 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-combined-ca-bundle\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122747 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-config-data\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122820 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-scripts\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122871 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-log-httpd\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.122912 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-run-httpd\") pod \"4dcc9f1d-087a-4265-8398-0f03212f2afa\" (UID: \"4dcc9f1d-087a-4265-8398-0f03212f2afa\") " Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.123859 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.124044 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.124069 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.128491 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-scripts" (OuterVolumeSpecName: "scripts") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.140159 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcc9f1d-087a-4265-8398-0f03212f2afa-kube-api-access-7ctgq" (OuterVolumeSpecName: "kube-api-access-7ctgq") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "kube-api-access-7ctgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.152861 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.201457 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.225797 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.225836 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dcc9f1d-087a-4265-8398-0f03212f2afa-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.225847 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.225862 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctgq\" (UniqueName: \"kubernetes.io/projected/4dcc9f1d-087a-4265-8398-0f03212f2afa-kube-api-access-7ctgq\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.225874 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.229229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-config-data" (OuterVolumeSpecName: "config-data") pod "4dcc9f1d-087a-4265-8398-0f03212f2afa" (UID: "4dcc9f1d-087a-4265-8398-0f03212f2afa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.328047 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcc9f1d-087a-4265-8398-0f03212f2afa-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.910468 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dcc9f1d-087a-4265-8398-0f03212f2afa","Type":"ContainerDied","Data":"ce4c80931851223d35701ae4901c7d6ae02a38cc3d120d7cd018c567b22f326f"} Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.910912 4679 scope.go:117] "RemoveContainer" containerID="0cd9e5792981848bc987eedf8076d82106736ae1903f04409bff41603362e270" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.911004 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.915925 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e72b3e9b-a5ec-43f1-a286-43f2ce2f5240","Type":"ContainerStarted","Data":"463d4eee26d83e42c6015ad32b537c5cb0d3a9b9abbfd168dffae1460e8bd093"} Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.941886 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.941837995 podStartE2EDuration="2.941837995s" podCreationTimestamp="2026-02-03 12:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:26:48.934398801 +0000 UTC m=+1281.409294889" watchObservedRunningTime="2026-02-03 12:26:48.941837995 +0000 UTC m=+1281.416734083" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.964581 4679 scope.go:117] "RemoveContainer" containerID="101dfae6cb9467e10d28767e0d9e4385353783bd701be0390bacde9ec81ebb97" Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.967616 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.978545 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:48 crc kubenswrapper[4679]: I0203 12:26:48.997904 4679 scope.go:117] "RemoveContainer" containerID="91047b9ad100b5cd4d79adbe696be7d8fbcf541c46cac714227b1d3dbd089128" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.005939 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:49 crc kubenswrapper[4679]: E0203 12:26:49.006499 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="sg-core" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006522 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="sg-core" Feb 03 12:26:49 crc kubenswrapper[4679]: E0203 12:26:49.006539 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="proxy-httpd" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006546 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="proxy-httpd" Feb 03 12:26:49 crc kubenswrapper[4679]: E0203 12:26:49.006572 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-notification-agent" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006580 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-notification-agent" Feb 03 12:26:49 crc kubenswrapper[4679]: E0203 12:26:49.006599 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-central-agent" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006606 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-central-agent" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006785 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="proxy-httpd" Feb 03 12:26:49 crc 
kubenswrapper[4679]: I0203 12:26:49.006804 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-central-agent" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006816 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="sg-core" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.006825 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" containerName="ceilometer-notification-agent" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.008951 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.012457 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.012760 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.020465 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.047822 4679 scope.go:117] "RemoveContainer" containerID="4f5d31e34dd8f0c320fbfcbc0377c108f9e3777e5c25e2e3939f8670843079ff" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.142998 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.143046 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-scripts\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.143067 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4lx\" (UniqueName: \"kubernetes.io/projected/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-kube-api-access-ds4lx\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.143134 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-run-httpd\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.143248 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-config-data\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.143381 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-log-httpd\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.143407 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.197768 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.197850 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.242439 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.246629 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-log-httpd\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.246697 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.246779 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.246811 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-scripts\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.246841 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4lx\" (UniqueName: \"kubernetes.io/projected/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-kube-api-access-ds4lx\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.246931 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-run-httpd\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.247046 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-config-data\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " 
pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.252777 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.253705 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-config-data\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.254492 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-run-httpd\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.254754 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-log-httpd\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.260602 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-scripts\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.276937 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.292507 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.302478 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4lx\" (UniqueName: \"kubernetes.io/projected/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-kube-api-access-ds4lx\") pod \"ceilometer-0\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.347187 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.848266 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.953046 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerStarted","Data":"360ae2b5b45b7b2e8ceb0f895dfcaca925f058df8f2b37c261c4421ee1227961"} Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.954856 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.954903 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:49 crc kubenswrapper[4679]: I0203 12:26:49.954918 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 12:26:50 crc kubenswrapper[4679]: I0203 12:26:50.231704 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcc9f1d-087a-4265-8398-0f03212f2afa" path="/var/lib/kubelet/pods/4dcc9f1d-087a-4265-8398-0f03212f2afa/volumes" Feb 03 12:26:50 crc kubenswrapper[4679]: I0203 12:26:50.966594 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerStarted","Data":"7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4"} Feb 03 12:26:51 crc kubenswrapper[4679]: I0203 12:26:51.978235 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:26:51 crc kubenswrapper[4679]: I0203 12:26:51.979614 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:26:51 crc kubenswrapper[4679]: I0203 12:26:51.978217 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerStarted","Data":"3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b"} Feb 03 12:26:52 crc kubenswrapper[4679]: I0203 12:26:52.241485 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:26:52 crc kubenswrapper[4679]: I0203 12:26:52.308347 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 12:26:52 crc kubenswrapper[4679]: I0203 12:26:52.989584 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerStarted","Data":"0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee"} Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.021993 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerStarted","Data":"b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557"} Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.022721 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.050811 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.227674042 podStartE2EDuration="7.050790661s" podCreationTimestamp="2026-02-03 
12:26:48 +0000 UTC" firstStartedPulling="2026-02-03 12:26:49.855774599 +0000 UTC m=+1282.330670697" lastFinishedPulling="2026-02-03 12:26:54.678891228 +0000 UTC m=+1287.153787316" observedRunningTime="2026-02-03 12:26:55.046071668 +0000 UTC m=+1287.520967756" watchObservedRunningTime="2026-02-03 12:26:55.050790661 +0000 UTC m=+1287.525686749" Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.194801 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.194856 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.241281 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:55 crc kubenswrapper[4679]: I0203 12:26:55.251679 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:56 crc kubenswrapper[4679]: I0203 12:26:56.038981 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:56 crc kubenswrapper[4679]: I0203 12:26:56.039573 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.332977 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.843590 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-js858"] Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.844768 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.847686 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.848040 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.868689 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-js858"] Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.943254 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-scripts\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.943378 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.943513 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mt5v\" (UniqueName: \"kubernetes.io/projected/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-kube-api-access-5mt5v\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:57 crc kubenswrapper[4679]: I0203 12:26:57.943543 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-config-data\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.016866 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.018893 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.032924 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.044941 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.045030 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mt5v\" (UniqueName: \"kubernetes.io/projected/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-kube-api-access-5mt5v\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.045058 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-config-data\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.045137 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-scripts\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.054962 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-scripts\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.059156 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.062616 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-config-data\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.064773 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.066325 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.066377 4679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.074296 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mt5v\" (UniqueName: 
\"kubernetes.io/projected/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-kube-api-access-5mt5v\") pod \"nova-cell0-cell-mapping-js858\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") " pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.146912 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwc6j\" (UniqueName: \"kubernetes.io/projected/c51123a8-f43c-413f-9752-215f4ae1a2b2-kube-api-access-zwc6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.147680 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.147789 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.164662 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.169405 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.175603 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.178325 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-js858" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.208595 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.210308 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.218294 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.249277 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.249678 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f47a553-a55f-4b11-b9cc-b50e9f547c12-logs\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250012 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250132 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-config-data\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250194 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzbm8\" (UniqueName: \"kubernetes.io/projected/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-kube-api-access-pzbm8\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250240 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwc6j\" (UniqueName: \"kubernetes.io/projected/c51123a8-f43c-413f-9752-215f4ae1a2b2-kube-api-access-zwc6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250276 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-logs\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250312 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250398 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sql6\" (UniqueName: \"kubernetes.io/projected/0f47a553-a55f-4b11-b9cc-b50e9f547c12-kube-api-access-4sql6\") pod \"nova-api-0\" 
(UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250466 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-config-data\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.250521 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.287445 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.291821 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.291864 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.291877 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.297383 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.300158 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.305704 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.306546 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwc6j\" (UniqueName: \"kubernetes.io/projected/c51123a8-f43c-413f-9752-215f4ae1a2b2-kube-api-access-zwc6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.320615 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352246 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-config-data\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352334 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789d7\" (UniqueName: \"kubernetes.io/projected/544a72bc-27cb-4720-a8c7-a6607732ceae-kube-api-access-789d7\") pod 
\"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352385 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352418 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-config-data\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352450 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzbm8\" (UniqueName: \"kubernetes.io/projected/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-kube-api-access-pzbm8\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352474 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-logs\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352511 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sql6\" (UniqueName: \"kubernetes.io/projected/0f47a553-a55f-4b11-b9cc-b50e9f547c12-kube-api-access-4sql6\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352545 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-config-data\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352569 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352659 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.352757 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f47a553-a55f-4b11-b9cc-b50e9f547c12-logs\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.353289 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0f47a553-a55f-4b11-b9cc-b50e9f547c12-logs\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.357251 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-logs\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.361623 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.362204 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.364985 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-config-data\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.371443 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-config-data\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.373965 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzbm8\" (UniqueName: \"kubernetes.io/projected/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-kube-api-access-pzbm8\") pod \"nova-metadata-0\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") " pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.377093 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sql6\" (UniqueName: \"kubernetes.io/projected/0f47a553-a55f-4b11-b9cc-b50e9f547c12-kube-api-access-4sql6\") pod \"nova-api-0\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.384080 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fdmfq"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.385789 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.394272 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fdmfq"] Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.454337 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-svc\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.454731 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.454770 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.454856 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-config-data\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.454881 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-config\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.455044 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789d7\" (UniqueName: \"kubernetes.io/projected/544a72bc-27cb-4720-a8c7-a6607732ceae-kube-api-access-789d7\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.455099 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.455181 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ljjz\" (UniqueName: \"kubernetes.io/projected/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-kube-api-access-4ljjz\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.455207 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.455663 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.461277 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.465905 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-config-data\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.475978 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.506121 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.508690 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789d7\" (UniqueName: \"kubernetes.io/projected/544a72bc-27cb-4720-a8c7-a6607732ceae-kube-api-access-789d7\") pod \"nova-scheduler-0\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.525260 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.538528 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.539160 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.559164 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ljjz\" (UniqueName: \"kubernetes.io/projected/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-kube-api-access-4ljjz\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.559207 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.559294 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-svc\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.559388 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.559466 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-config\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.559553 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.565153 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.565165 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:26:58 crc kubenswrapper[4679]: 
I0203 12:26:58.566523 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-config\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.566601 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-svc\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.567647 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.599735 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ljjz\" (UniqueName: \"kubernetes.io/projected/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-kube-api-access-4ljjz\") pod \"dnsmasq-dns-757b4f8459-fdmfq\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.871841 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:26:58 crc kubenswrapper[4679]: I0203 12:26:58.936659 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-js858"]
Feb 03 12:26:58 crc kubenswrapper[4679]: W0203 12:26:58.941850 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30a8dcc4_d726_4b2c_b22d_bd9d3ba48c0f.slice/crio-68c02fcd8a4ce51080a2af0891108eeee757863a0922731cb07315f0025c2318 WatchSource:0}: Error finding container 68c02fcd8a4ce51080a2af0891108eeee757863a0922731cb07315f0025c2318: Status 404 returned error can't find the container with id 68c02fcd8a4ce51080a2af0891108eeee757863a0922731cb07315f0025c2318
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.082275 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-js858" event={"ID":"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f","Type":"ContainerStarted","Data":"68c02fcd8a4ce51080a2af0891108eeee757863a0922731cb07315f0025c2318"}
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.310126 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.434461 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqr7h"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.436642 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.445021 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.445137 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.445858 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqr7h"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.498673 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hff26\" (UniqueName: \"kubernetes.io/projected/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-kube-api-access-hff26\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.498730 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.498757 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-config-data\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.498801 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-scripts\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.600997 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hff26\" (UniqueName: \"kubernetes.io/projected/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-kube-api-access-hff26\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.601073 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.601103 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-config-data\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.601146 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-scripts\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.608735 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-config-data\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.609570 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-scripts\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.616567 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.626935 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hff26\" (UniqueName: \"kubernetes.io/projected/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-kube-api-access-hff26\") pod \"nova-cell1-conductor-db-sync-qqr7h\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") " pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.694702 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.751131 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.764199 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.769649 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fdmfq"]
Feb 03 12:26:59 crc kubenswrapper[4679]: I0203 12:26:59.793588 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.098391 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-js858" event={"ID":"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f","Type":"ContainerStarted","Data":"444d740cad8ef7944e636fc1d6d2ed31382209228fac6ebce3653ededa713933"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.105636 4679 generic.go:334] "Generic (PLEG): container finished" podID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerID="46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a" exitCode=0
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.105836 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" event={"ID":"26335cab-653d-46b0-97a2-a8b4ba9ebdcc","Type":"ContainerDied","Data":"46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.105868 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" event={"ID":"26335cab-653d-46b0-97a2-a8b4ba9ebdcc","Type":"ContainerStarted","Data":"454a2852db4fad4c0ea6e2c3665cac7575cf10b3988f66d0b0f6d83a24087078"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.109718 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5","Type":"ContainerStarted","Data":"b4db6e2d11768f923f76ad9f4bc6fa860ff8199e0663260bfabf8b68e23017a0"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.116198 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c51123a8-f43c-413f-9752-215f4ae1a2b2","Type":"ContainerStarted","Data":"595c1cd1f8c36d07e43227e1abb455c94318678a1fc1759dbda590c22aa66343"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.145543 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f47a553-a55f-4b11-b9cc-b50e9f547c12","Type":"ContainerStarted","Data":"a68721e91e489cf9c21526311d15a2a14bc4d9dd4059ff5ca4c3e687f9c3ec59"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.165318 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-js858" podStartSLOduration=3.165292517 podStartE2EDuration="3.165292517s" podCreationTimestamp="2026-02-03 12:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:00.126850466 +0000 UTC m=+1292.601746554" watchObservedRunningTime="2026-02-03 12:27:00.165292517 +0000 UTC m=+1292.640188605"
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.174689 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"544a72bc-27cb-4720-a8c7-a6607732ceae","Type":"ContainerStarted","Data":"745feb7d5dca138c336d13c5410bad4b18d08fba7fa2113204578f818737fa28"}
Feb 03 12:27:00 crc kubenswrapper[4679]: I0203 12:27:00.374731 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqr7h"]
Feb 03 12:27:00 crc kubenswrapper[4679]: W0203 12:27:00.407080 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8b88a0_ed8f_4446_ac3c_dc76a0c191b5.slice/crio-5ed80f1390a286ff08190b0cda2445612451777572585e04a1d35f30bd9b12cc WatchSource:0}: Error finding container 5ed80f1390a286ff08190b0cda2445612451777572585e04a1d35f30bd9b12cc: Status 404 returned error can't find the container with id 5ed80f1390a286ff08190b0cda2445612451777572585e04a1d35f30bd9b12cc
Feb 03 12:27:01 crc kubenswrapper[4679]: I0203 12:27:01.219001 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqr7h" event={"ID":"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5","Type":"ContainerStarted","Data":"bdec05c6e8bb0daab322f0eed2cd57bef2c6004a7dcc17a1842022c6304ce8dc"}
Feb 03 12:27:01 crc kubenswrapper[4679]: I0203 12:27:01.219585 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqr7h" event={"ID":"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5","Type":"ContainerStarted","Data":"5ed80f1390a286ff08190b0cda2445612451777572585e04a1d35f30bd9b12cc"}
Feb 03 12:27:01 crc kubenswrapper[4679]: I0203 12:27:01.224382 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" event={"ID":"26335cab-653d-46b0-97a2-a8b4ba9ebdcc","Type":"ContainerStarted","Data":"34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5"}
Feb 03 12:27:01 crc kubenswrapper[4679]: I0203 12:27:01.252227 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qqr7h" podStartSLOduration=2.2521994530000002 podStartE2EDuration="2.252199453s" podCreationTimestamp="2026-02-03 12:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:01.237411358 +0000 UTC m=+1293.712307466" watchObservedRunningTime="2026-02-03 12:27:01.252199453 +0000 UTC m=+1293.727095541"
Feb 03 12:27:01 crc kubenswrapper[4679]: I0203 12:27:01.267759 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" podStartSLOduration=3.267733288 podStartE2EDuration="3.267733288s" podCreationTimestamp="2026-02-03 12:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:01.25858998 +0000 UTC m=+1293.733486088" watchObservedRunningTime="2026-02-03 12:27:01.267733288 +0000 UTC m=+1293.742629376"
Feb 03 12:27:02 crc kubenswrapper[4679]: I0203 12:27:02.245638 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:27:02 crc kubenswrapper[4679]: I0203 12:27:02.450126 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 12:27:02 crc kubenswrapper[4679]: I0203 12:27:02.464229 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.270998 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c51123a8-f43c-413f-9752-215f4ae1a2b2","Type":"ContainerStarted","Data":"16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5"}
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.272665 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c51123a8-f43c-413f-9752-215f4ae1a2b2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5" gracePeriod=30
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.289712 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f47a553-a55f-4b11-b9cc-b50e9f547c12","Type":"ContainerStarted","Data":"a92eeb2c5d3eb30dfd60ee6d9798719b7a76c62fcb55765e146b6eb799e3e3a9"}
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.289758 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f47a553-a55f-4b11-b9cc-b50e9f547c12","Type":"ContainerStarted","Data":"d16beca63b9def5f0d4d58879296ecb92597a816b12f4c61293dca9b4b247eaa"}
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.294376 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"544a72bc-27cb-4720-a8c7-a6607732ceae","Type":"ContainerStarted","Data":"35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a"}
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.303294 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.5539057339999998 podStartE2EDuration="7.303269158s" podCreationTimestamp="2026-02-03 12:26:57 +0000 UTC" firstStartedPulling="2026-02-03 12:26:59.322462714 +0000 UTC m=+1291.797358802" lastFinishedPulling="2026-02-03 12:27:03.071826138 +0000 UTC m=+1295.546722226" observedRunningTime="2026-02-03 12:27:04.302667922 +0000 UTC m=+1296.777564010" watchObservedRunningTime="2026-02-03 12:27:04.303269158 +0000 UTC m=+1296.778165246"
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.306618 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5","Type":"ContainerStarted","Data":"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"}
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.306669 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5","Type":"ContainerStarted","Data":"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"}
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.306826 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-log" containerID="cri-o://654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b" gracePeriod=30
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.306988 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-metadata" containerID="cri-o://6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035" gracePeriod=30
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.322724 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.929660525 podStartE2EDuration="6.322703473s" podCreationTimestamp="2026-02-03 12:26:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:59.6860855 +0000 UTC m=+1292.160981588" lastFinishedPulling="2026-02-03 12:27:03.079128438 +0000 UTC m=+1295.554024536" observedRunningTime="2026-02-03 12:27:04.321119242 +0000 UTC m=+1296.796015340" watchObservedRunningTime="2026-02-03 12:27:04.322703473 +0000 UTC m=+1296.797599561"
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.349160 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.001776594 podStartE2EDuration="6.349135972s" podCreationTimestamp="2026-02-03 12:26:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:59.726807181 +0000 UTC m=+1292.201703269" lastFinishedPulling="2026-02-03 12:27:03.074166539 +0000 UTC m=+1295.549062647" observedRunningTime="2026-02-03 12:27:04.344347147 +0000 UTC m=+1296.819243235" watchObservedRunningTime="2026-02-03 12:27:04.349135972 +0000 UTC m=+1296.824032060"
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.369590 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.028990441 podStartE2EDuration="6.369562803s" podCreationTimestamp="2026-02-03 12:26:58 +0000 UTC" firstStartedPulling="2026-02-03 12:26:59.731900523 +0000 UTC m=+1292.206796611" lastFinishedPulling="2026-02-03 12:27:03.072472865 +0000 UTC m=+1295.547368973" observedRunningTime="2026-02-03 12:27:04.362612953 +0000 UTC m=+1296.837509071" watchObservedRunningTime="2026-02-03 12:27:04.369562803 +0000 UTC m=+1296.844458891"
Feb 03 12:27:04 crc kubenswrapper[4679]: I0203 12:27:04.989934 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.135818 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzbm8\" (UniqueName: \"kubernetes.io/projected/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-kube-api-access-pzbm8\") pod \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") "
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.135980 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-logs\") pod \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") "
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.136049 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-config-data\") pod \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") "
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.136128 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-combined-ca-bundle\") pod \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\" (UID: \"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5\") "
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.139236 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-logs" (OuterVolumeSpecName: "logs") pod "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" (UID: "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.145860 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-kube-api-access-pzbm8" (OuterVolumeSpecName: "kube-api-access-pzbm8") pod "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" (UID: "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5"). InnerVolumeSpecName "kube-api-access-pzbm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.192695 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" (UID: "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.199082 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-config-data" (OuterVolumeSpecName: "config-data") pod "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" (UID: "4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.242228 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.242297 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.242319 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.242416 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzbm8\" (UniqueName: \"kubernetes.io/projected/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5-kube-api-access-pzbm8\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.326330 4679 generic.go:334] "Generic (PLEG): container finished" podID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerID="6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035" exitCode=0
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.326426 4679 generic.go:334] "Generic (PLEG): container finished" podID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerID="654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b" exitCode=143
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.326399 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.326394 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5","Type":"ContainerDied","Data":"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"}
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.328070 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5","Type":"ContainerDied","Data":"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"}
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.328085 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5","Type":"ContainerDied","Data":"b4db6e2d11768f923f76ad9f4bc6fa860ff8199e0663260bfabf8b68e23017a0"}
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.328105 4679 scope.go:117] "RemoveContainer" containerID="6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.372068 4679 scope.go:117] "RemoveContainer" containerID="654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.398835 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.416632 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.431409 4679 scope.go:117] "RemoveContainer" containerID="6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"
Feb 03 12:27:05 crc kubenswrapper[4679]: E0203 12:27:05.432621 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035\": container with ID starting with 6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035 not found: ID does not exist" containerID="6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.432666 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"} err="failed to get container status \"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035\": rpc error: code = NotFound desc = could not find container \"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035\": container with ID starting with 6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035 not found: ID does not exist"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.432711 4679 scope.go:117] "RemoveContainer" containerID="654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"
Feb 03 12:27:05 crc kubenswrapper[4679]: E0203 12:27:05.433584 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b\": container with ID starting with 654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b not found: ID does not exist" containerID="654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.433612 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"} err="failed to get container status \"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b\": rpc error: code = NotFound desc = could not find container \"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b\": container with ID starting with 654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b not found: ID does not exist"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.433633 4679 scope.go:117] "RemoveContainer" containerID="6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.433978 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035"} err="failed to get container status \"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035\": rpc error: code = NotFound desc = could not find container \"6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035\": container with ID starting with 6ff6b0525bed68ac1b8e23325b0fe165ed1eb28164473d57f4aef02e3131f035 not found: ID does not exist"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.434004 4679 scope.go:117] "RemoveContainer" containerID="654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.434563 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b"} err="failed to get container status \"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b\": rpc error: code = NotFound desc = could not find container \"654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b\": container with ID starting with 654c9017055ed7c2c5eefb9a54ced1e1dc5ab781de606010ac832d1b2a24403b not found: ID does not exist"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.445435 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:05 crc kubenswrapper[4679]: E0203 12:27:05.446115 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-log"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.446160 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-log"
Feb 03 12:27:05 crc kubenswrapper[4679]: E0203 12:27:05.446173 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-metadata"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.446180 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-metadata"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.446595 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-metadata"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.446620 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" containerName="nova-metadata-log"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.448050 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.450980 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.453876 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.474331 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.550520 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-config-data\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.550632 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlln\" (UniqueName: \"kubernetes.io/projected/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-kube-api-access-9qlln\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.550672 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-logs\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.550798 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.550851 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.652709 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.652788 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.652901 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-config-data\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.652957 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlln\" (UniqueName: \"kubernetes.io/projected/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-kube-api-access-9qlln\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.652991 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-logs\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.653797 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-logs\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.659132 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.659250 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-config-data\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.668822 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.706951 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlln\" (UniqueName: \"kubernetes.io/projected/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-kube-api-access-9qlln\") pod \"nova-metadata-0\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " pod="openstack/nova-metadata-0"
Feb 03 12:27:05 crc kubenswrapper[4679]: I0203 12:27:05.775013 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:27:06 crc kubenswrapper[4679]: I0203 12:27:06.225867 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5" path="/var/lib/kubelet/pods/4d11c2c0-17be-4b4b-9fb7-4ec97280e0f5/volumes"
Feb 03 12:27:06 crc kubenswrapper[4679]: I0203 12:27:06.272994 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:06 crc kubenswrapper[4679]: W0203 12:27:06.277954 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc2d0b8_54f3_4d94_a95c_f2ede426b381.slice/crio-081be6d74cd79b4c8e2c8bf652be34addf62b613aa86e0c658e031f84a1ed9d2 WatchSource:0}: Error finding container 081be6d74cd79b4c8e2c8bf652be34addf62b613aa86e0c658e031f84a1ed9d2: Status 404 returned error can't find the container with id 081be6d74cd79b4c8e2c8bf652be34addf62b613aa86e0c658e031f84a1ed9d2
Feb 03 12:27:06 crc kubenswrapper[4679]: I0203 12:27:06.338627 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc2d0b8-54f3-4d94-a95c-f2ede426b381","Type":"ContainerStarted","Data":"081be6d74cd79b4c8e2c8bf652be34addf62b613aa86e0c658e031f84a1ed9d2"}
Feb 03 12:27:07 crc kubenswrapper[4679]: I0203 12:27:07.354895 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc2d0b8-54f3-4d94-a95c-f2ede426b381","Type":"ContainerStarted","Data":"2114b8cc51f1bc8c4f634b9e01a84a647505145cffd235237d39cca0a12f71cb"}
Feb 03 12:27:07 crc kubenswrapper[4679]: I0203 12:27:07.355253 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc2d0b8-54f3-4d94-a95c-f2ede426b381","Type":"ContainerStarted","Data":"22621cfc1b54423d2673c789621a5822c9114633c4cf7769520cd5acdaba53c7"}
Feb 03 12:27:07 crc kubenswrapper[4679]: I0203 12:27:07.386436 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.386412303 podStartE2EDuration="2.386412303s" podCreationTimestamp="2026-02-03 12:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:07.383005241 +0000 UTC m=+1299.857901359" watchObservedRunningTime="2026-02-03 12:27:07.386412303 +0000 UTC m=+1299.861308391"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.366583 4679 generic.go:334] "Generic (PLEG): container finished" podID="30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" containerID="444d740cad8ef7944e636fc1d6d2ed31382209228fac6ebce3653ededa713933" exitCode=0
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.366684 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-js858" event={"ID":"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f","Type":"ContainerDied","Data":"444d740cad8ef7944e636fc1d6d2ed31382209228fac6ebce3653ededa713933"}
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.465617 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.477062 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.477120 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.526876 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.526952 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.552919 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.869637 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq"
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.954569 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d29d9"]
Feb 03 12:27:08 crc kubenswrapper[4679]: I0203 12:27:08.954918 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerName="dnsmasq-dns" containerID="cri-o://fd64be3c7267c16b4462d55443bfe38b05347f083d3440bf06e18a78c62abfba" gracePeriod=10
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.381773 4679 generic.go:334] "Generic (PLEG): container finished" podID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerID="fd64be3c7267c16b4462d55443bfe38b05347f083d3440bf06e18a78c62abfba" exitCode=0
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.381800 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" event={"ID":"826b897e-db6e-4e5c-a3a4-c4b78b1cd377","Type":"ContainerDied","Data":"fd64be3c7267c16b4462d55443bfe38b05347f083d3440bf06e18a78c62abfba"}
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.383924 4679 generic.go:334] "Generic (PLEG): container finished" podID="ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" containerID="bdec05c6e8bb0daab322f0eed2cd57bef2c6004a7dcc17a1842022c6304ce8dc" exitCode=0
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.383954 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqr7h" event={"ID":"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5","Type":"ContainerDied","Data":"bdec05c6e8bb0daab322f0eed2cd57bef2c6004a7dcc17a1842022c6304ce8dc"}
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.433902 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.559650 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.559668 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.575988 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9"
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.753633 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-sb\") pod \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") "
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.753723 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t29bh\" (UniqueName: \"kubernetes.io/projected/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-kube-api-access-t29bh\") pod \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") "
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.753884 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-config\") pod \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") "
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.753982 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-nb\") pod \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") "
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.754005 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-swift-storage-0\") pod \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") "
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.754025 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-svc\") pod \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\" (UID: \"826b897e-db6e-4e5c-a3a4-c4b78b1cd377\") "
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.767213 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-kube-api-access-t29bh" (OuterVolumeSpecName: "kube-api-access-t29bh") pod "826b897e-db6e-4e5c-a3a4-c4b78b1cd377" (UID: "826b897e-db6e-4e5c-a3a4-c4b78b1cd377"). InnerVolumeSpecName "kube-api-access-t29bh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.829435 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-config" (OuterVolumeSpecName: "config") pod "826b897e-db6e-4e5c-a3a4-c4b78b1cd377" (UID: "826b897e-db6e-4e5c-a3a4-c4b78b1cd377"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.851102 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "826b897e-db6e-4e5c-a3a4-c4b78b1cd377" (UID: "826b897e-db6e-4e5c-a3a4-c4b78b1cd377"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.856604 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t29bh\" (UniqueName: \"kubernetes.io/projected/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-kube-api-access-t29bh\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.856644 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-config\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.856656 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.858080 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "826b897e-db6e-4e5c-a3a4-c4b78b1cd377" (UID: "826b897e-db6e-4e5c-a3a4-c4b78b1cd377"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.892085 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "826b897e-db6e-4e5c-a3a4-c4b78b1cd377" (UID: "826b897e-db6e-4e5c-a3a4-c4b78b1cd377"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.898098 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "826b897e-db6e-4e5c-a3a4-c4b78b1cd377" (UID: "826b897e-db6e-4e5c-a3a4-c4b78b1cd377"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.922214 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-js858"
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.958241 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.958317 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:09 crc kubenswrapper[4679]: I0203 12:27:09.958331 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/826b897e-db6e-4e5c-a3a4-c4b78b1cd377-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.059622 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-scripts\") pod \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") "
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.059752 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mt5v\" (UniqueName: \"kubernetes.io/projected/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-kube-api-access-5mt5v\") pod \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") "
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.059775 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-config-data\") pod \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") "
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.059818 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-combined-ca-bundle\") pod \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\" (UID: \"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f\") "
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.064657 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-scripts" (OuterVolumeSpecName: "scripts") pod "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" (UID: "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.065122 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-kube-api-access-5mt5v" (OuterVolumeSpecName: "kube-api-access-5mt5v") pod "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" (UID: "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f"). InnerVolumeSpecName "kube-api-access-5mt5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.105454 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" (UID: "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.109923 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-config-data" (OuterVolumeSpecName: "config-data") pod "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" (UID: "30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.162667 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.162725 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mt5v\" (UniqueName: \"kubernetes.io/projected/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-kube-api-access-5mt5v\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.162741 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.162754 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.394393 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-js858" event={"ID":"30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f","Type":"ContainerDied","Data":"68c02fcd8a4ce51080a2af0891108eeee757863a0922731cb07315f0025c2318"}
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.395579 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c02fcd8a4ce51080a2af0891108eeee757863a0922731cb07315f0025c2318"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.394399 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-js858"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.397590 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.398800 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d29d9" event={"ID":"826b897e-db6e-4e5c-a3a4-c4b78b1cd377","Type":"ContainerDied","Data":"42d779c1a47c15ce01285413c81f48c0e3e1c4c6088150d85a4593a0969aec40"}
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.398844 4679 scope.go:117] "RemoveContainer" containerID="fd64be3c7267c16b4462d55443bfe38b05347f083d3440bf06e18a78c62abfba"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.436810 4679 scope.go:117] "RemoveContainer" containerID="fd14f27cac64aec0c46d763e4b85336924a7a2c29f3ae1c3ec0acc8f42535002"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.458401 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d29d9"]
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.469288 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d29d9"]
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.652529 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.653192 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-log" containerID="cri-o://d16beca63b9def5f0d4d58879296ecb92597a816b12f4c61293dca9b4b247eaa" gracePeriod=30
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.653767 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-api" containerID="cri-o://a92eeb2c5d3eb30dfd60ee6d9798719b7a76c62fcb55765e146b6eb799e3e3a9" gracePeriod=30
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.668576 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.675921 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.676226 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-log" containerID="cri-o://22621cfc1b54423d2673c789621a5822c9114633c4cf7769520cd5acdaba53c7" gracePeriod=30
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.676479 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-metadata" containerID="cri-o://2114b8cc51f1bc8c4f634b9e01a84a647505145cffd235237d39cca0a12f71cb" gracePeriod=30
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.775548 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.775617 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 03 12:27:10 crc kubenswrapper[4679]: I0203 12:27:10.929699 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.088063 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-config-data\") pod \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") "
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.088701 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-scripts\") pod \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") "
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.088819 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hff26\" (UniqueName: \"kubernetes.io/projected/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-kube-api-access-hff26\") pod \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") "
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.088937 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-combined-ca-bundle\") pod \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\" (UID: \"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5\") "
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.104707 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-kube-api-access-hff26" (OuterVolumeSpecName: "kube-api-access-hff26") pod "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" (UID: "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5"). InnerVolumeSpecName "kube-api-access-hff26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.107669 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-scripts" (OuterVolumeSpecName: "scripts") pod "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" (UID: "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.134586 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-config-data" (OuterVolumeSpecName: "config-data") pod "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" (UID: "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.140670 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" (UID: "ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.191960 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.192009 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.192019 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hff26\" (UniqueName: \"kubernetes.io/projected/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-kube-api-access-hff26\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.192030 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.408302 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqr7h" event={"ID":"ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5","Type":"ContainerDied","Data":"5ed80f1390a286ff08190b0cda2445612451777572585e04a1d35f30bd9b12cc"}
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.408383 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ed80f1390a286ff08190b0cda2445612451777572585e04a1d35f30bd9b12cc"
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.408460 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqr7h"
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.412879 4679 generic.go:334] "Generic (PLEG): container finished" podID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerID="d16beca63b9def5f0d4d58879296ecb92597a816b12f4c61293dca9b4b247eaa" exitCode=143
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.412958 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f47a553-a55f-4b11-b9cc-b50e9f547c12","Type":"ContainerDied","Data":"d16beca63b9def5f0d4d58879296ecb92597a816b12f4c61293dca9b4b247eaa"}
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.415102 4679 generic.go:334] "Generic (PLEG): container finished" podID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerID="2114b8cc51f1bc8c4f634b9e01a84a647505145cffd235237d39cca0a12f71cb" exitCode=0
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.415136 4679 generic.go:334] "Generic (PLEG): container finished" podID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerID="22621cfc1b54423d2673c789621a5822c9114633c4cf7769520cd5acdaba53c7" exitCode=143
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.415190 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc2d0b8-54f3-4d94-a95c-f2ede426b381","Type":"ContainerDied","Data":"2114b8cc51f1bc8c4f634b9e01a84a647505145cffd235237d39cca0a12f71cb"}
Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.415221 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc2d0b8-54f3-4d94-a95c-f2ede426b381","Type":"ContainerDied","Data":"22621cfc1b54423d2673c789621a5822c9114633c4cf7769520cd5acdaba53c7"}
Feb 03 12:27:11 crc kubenswrapper[4679]:
I0203 12:27:11.415232 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc2d0b8-54f3-4d94-a95c-f2ede426b381","Type":"ContainerDied","Data":"081be6d74cd79b4c8e2c8bf652be34addf62b613aa86e0c658e031f84a1ed9d2"} Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.415242 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="081be6d74cd79b4c8e2c8bf652be34addf62b613aa86e0c658e031f84a1ed9d2" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.417676 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="544a72bc-27cb-4720-a8c7-a6607732ceae" containerName="nova-scheduler-scheduler" containerID="cri-o://35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a" gracePeriod=30 Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.467636 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.563596 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 12:27:11 crc kubenswrapper[4679]: E0203 12:27:11.564087 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerName="init" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.564103 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerName="init" Feb 03 12:27:11 crc kubenswrapper[4679]: E0203 12:27:11.564123 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" containerName="nova-cell1-conductor-db-sync" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.564130 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" containerName="nova-cell1-conductor-db-sync" Feb 03 12:27:11 crc kubenswrapper[4679]: E0203 12:27:11.564136 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-metadata" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.564143 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-metadata" Feb 03 12:27:11 crc kubenswrapper[4679]: E0203 12:27:11.564157 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerName="dnsmasq-dns" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.564164 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerName="dnsmasq-dns" Feb 03 12:27:11 crc kubenswrapper[4679]: E0203 12:27:11.564175 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" containerName="nova-manage" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.564181 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" containerName="nova-manage" Feb 03 12:27:11 crc kubenswrapper[4679]: E0203 12:27:11.564197 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-log" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.564203 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" 
containerName="nova-metadata-log" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.575808 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" containerName="nova-manage" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.575870 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-log" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.575893 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" containerName="nova-metadata-metadata" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.575915 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" containerName="dnsmasq-dns" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.575938 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" containerName="nova-cell1-conductor-db-sync" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.576662 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.577998 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.582502 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.599155 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-combined-ca-bundle\") pod \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.599259 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-config-data\") pod \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.599331 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-nova-metadata-tls-certs\") pod \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.599377 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlln\" (UniqueName: \"kubernetes.io/projected/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-kube-api-access-9qlln\") pod \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.599430 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-logs\") pod \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\" (UID: \"2fc2d0b8-54f3-4d94-a95c-f2ede426b381\") " Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.600347 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-logs" (OuterVolumeSpecName: "logs") pod "2fc2d0b8-54f3-4d94-a95c-f2ede426b381" (UID: "2fc2d0b8-54f3-4d94-a95c-f2ede426b381"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.617870 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-kube-api-access-9qlln" (OuterVolumeSpecName: "kube-api-access-9qlln") pod "2fc2d0b8-54f3-4d94-a95c-f2ede426b381" (UID: "2fc2d0b8-54f3-4d94-a95c-f2ede426b381"). InnerVolumeSpecName "kube-api-access-9qlln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.630421 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-config-data" (OuterVolumeSpecName: "config-data") pod "2fc2d0b8-54f3-4d94-a95c-f2ede426b381" (UID: "2fc2d0b8-54f3-4d94-a95c-f2ede426b381"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.636556 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fc2d0b8-54f3-4d94-a95c-f2ede426b381" (UID: "2fc2d0b8-54f3-4d94-a95c-f2ede426b381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.684547 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2fc2d0b8-54f3-4d94-a95c-f2ede426b381" (UID: "2fc2d0b8-54f3-4d94-a95c-f2ede426b381"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701021 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701095 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4q2\" (UniqueName: \"kubernetes.io/projected/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-kube-api-access-cq4q2\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701171 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701282 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701294 4679 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701306 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlln\" (UniqueName: \"kubernetes.io/projected/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-kube-api-access-9qlln\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701317 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.701325 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc2d0b8-54f3-4d94-a95c-f2ede426b381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.803474 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.803939 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.803974 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq4q2\" (UniqueName: 
\"kubernetes.io/projected/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-kube-api-access-cq4q2\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.808014 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.808305 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.822455 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq4q2\" (UniqueName: \"kubernetes.io/projected/ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9-kube-api-access-cq4q2\") pod \"nova-cell1-conductor-0\" (UID: \"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9\") " pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:11 crc kubenswrapper[4679]: I0203 12:27:11.895890 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.225161 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826b897e-db6e-4e5c-a3a4-c4b78b1cd377" path="/var/lib/kubelet/pods/826b897e-db6e-4e5c-a3a4-c4b78b1cd377/volumes" Feb 03 12:27:12 crc kubenswrapper[4679]: W0203 12:27:12.379281 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff89b1cf_ed12_47b9_a9ca_2f9c1a5d35d9.slice/crio-8e1aeb24a3a2e1965984ec8df32a7a6d87331644e79c19aa0647d2245a0678a1 WatchSource:0}: Error finding container 8e1aeb24a3a2e1965984ec8df32a7a6d87331644e79c19aa0647d2245a0678a1: Status 404 returned error can't find the container with id 8e1aeb24a3a2e1965984ec8df32a7a6d87331644e79c19aa0647d2245a0678a1 Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.381594 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.452861 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9","Type":"ContainerStarted","Data":"8e1aeb24a3a2e1965984ec8df32a7a6d87331644e79c19aa0647d2245a0678a1"} Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.452925 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.558327 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.570051 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.588088 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.591038 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.593825 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.594725 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.598400 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.730263 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.730404 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284j7\" (UniqueName: \"kubernetes.io/projected/9fa29abf-007c-4d67-be39-3289d67a125d-kube-api-access-284j7\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.730587 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.730624 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa29abf-007c-4d67-be39-3289d67a125d-logs\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.730644 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-config-data\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.832246 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa29abf-007c-4d67-be39-3289d67a125d-logs\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.832308 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-config-data\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.832469 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 
12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.832509 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284j7\" (UniqueName: \"kubernetes.io/projected/9fa29abf-007c-4d67-be39-3289d67a125d-kube-api-access-284j7\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.832594 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.833809 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa29abf-007c-4d67-be39-3289d67a125d-logs\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.839324 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-config-data\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.841591 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.848440 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.858998 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284j7\" (UniqueName: \"kubernetes.io/projected/9fa29abf-007c-4d67-be39-3289d67a125d-kube-api-access-284j7\") pod \"nova-metadata-0\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") " pod="openstack/nova-metadata-0" Feb 03 12:27:12 crc kubenswrapper[4679]: I0203 12:27:12.908987 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 12:27:13 crc kubenswrapper[4679]: I0203 12:27:13.378545 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 12:27:13 crc kubenswrapper[4679]: I0203 12:27:13.467812 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9","Type":"ContainerStarted","Data":"4a8b8775dc1f9bae045d5ddbc5ad0e8a28b9ca2e1648bef127e632233ccb6a97"} Feb 03 12:27:13 crc kubenswrapper[4679]: I0203 12:27:13.468503 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:13 crc kubenswrapper[4679]: I0203 12:27:13.471021 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa29abf-007c-4d67-be39-3289d67a125d","Type":"ContainerStarted","Data":"ff21d31c421b4b1dad2963838c8dcb6a699fced6a7257a025a05bed8ba72feb3"} Feb 03 12:27:13 crc kubenswrapper[4679]: I0203 12:27:13.485147 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.485127918 podStartE2EDuration="2.485127918s" podCreationTimestamp="2026-02-03 12:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:13.484880622 +0000 UTC m=+1305.959776720" watchObservedRunningTime="2026-02-03 12:27:13.485127918 +0000 UTC m=+1305.960024006" Feb 03 12:27:13 crc kubenswrapper[4679]: E0203 12:27:13.529323 4679 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 12:27:13 crc kubenswrapper[4679]: E0203 12:27:13.530647 4679 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 12:27:13 crc kubenswrapper[4679]: E0203 12:27:13.532319 4679 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 12:27:13 crc kubenswrapper[4679]: E0203 12:27:13.532387 4679 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="544a72bc-27cb-4720-a8c7-a6607732ceae" containerName="nova-scheduler-scheduler" Feb 03 12:27:14 crc kubenswrapper[4679]: I0203 12:27:14.224316 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc2d0b8-54f3-4d94-a95c-f2ede426b381" path="/var/lib/kubelet/pods/2fc2d0b8-54f3-4d94-a95c-f2ede426b381/volumes" Feb 03 12:27:14 crc kubenswrapper[4679]: I0203 12:27:14.482642 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9fa29abf-007c-4d67-be39-3289d67a125d","Type":"ContainerStarted","Data":"19f783f3bb0e53b9312493e4a6fe7224a0a7f436d01360361631f1fe4f307ced"} Feb 03 12:27:14 crc kubenswrapper[4679]: I0203 12:27:14.482699 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa29abf-007c-4d67-be39-3289d67a125d","Type":"ContainerStarted","Data":"e6d5560b9cf46d3d95f66c3555d739c3c59a6eafea3d02913d6e8e8342244cec"} Feb 03 12:27:14 crc kubenswrapper[4679]: I0203 12:27:14.514710 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.514680038 podStartE2EDuration="2.514680038s" podCreationTimestamp="2026-02-03 12:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:14.501495741 +0000 UTC m=+1306.976391839" watchObservedRunningTime="2026-02-03 12:27:14.514680038 +0000 UTC m=+1306.989576126" Feb 03 12:27:15 crc kubenswrapper[4679]: I0203 12:27:15.496245 4679 generic.go:334] "Generic (PLEG): container finished" podID="544a72bc-27cb-4720-a8c7-a6607732ceae" containerID="35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a" exitCode=0 Feb 03 12:27:15 crc kubenswrapper[4679]: I0203 12:27:15.496424 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"544a72bc-27cb-4720-a8c7-a6607732ceae","Type":"ContainerDied","Data":"35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a"} Feb 03 12:27:15 crc kubenswrapper[4679]: I0203 12:27:15.851394 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:27:15 crc kubenswrapper[4679]: I0203 12:27:15.995588 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-789d7\" (UniqueName: \"kubernetes.io/projected/544a72bc-27cb-4720-a8c7-a6607732ceae-kube-api-access-789d7\") pod \"544a72bc-27cb-4720-a8c7-a6607732ceae\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " Feb 03 12:27:15 crc kubenswrapper[4679]: I0203 12:27:15.995696 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-config-data\") pod \"544a72bc-27cb-4720-a8c7-a6607732ceae\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " Feb 03 12:27:15 crc kubenswrapper[4679]: I0203 12:27:15.995808 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-combined-ca-bundle\") pod \"544a72bc-27cb-4720-a8c7-a6607732ceae\" (UID: \"544a72bc-27cb-4720-a8c7-a6607732ceae\") " Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.009913 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544a72bc-27cb-4720-a8c7-a6607732ceae-kube-api-access-789d7" (OuterVolumeSpecName: "kube-api-access-789d7") pod "544a72bc-27cb-4720-a8c7-a6607732ceae" (UID: "544a72bc-27cb-4720-a8c7-a6607732ceae"). InnerVolumeSpecName "kube-api-access-789d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.030023 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "544a72bc-27cb-4720-a8c7-a6607732ceae" (UID: "544a72bc-27cb-4720-a8c7-a6607732ceae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.033532 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-config-data" (OuterVolumeSpecName: "config-data") pod "544a72bc-27cb-4720-a8c7-a6607732ceae" (UID: "544a72bc-27cb-4720-a8c7-a6607732ceae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.098575 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-789d7\" (UniqueName: \"kubernetes.io/projected/544a72bc-27cb-4720-a8c7-a6607732ceae-kube-api-access-789d7\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.098626 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.098641 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544a72bc-27cb-4720-a8c7-a6607732ceae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.533041 4679 generic.go:334] "Generic (PLEG): container finished" podID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerID="a92eeb2c5d3eb30dfd60ee6d9798719b7a76c62fcb55765e146b6eb799e3e3a9" exitCode=0 Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.533596 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f47a553-a55f-4b11-b9cc-b50e9f547c12","Type":"ContainerDied","Data":"a92eeb2c5d3eb30dfd60ee6d9798719b7a76c62fcb55765e146b6eb799e3e3a9"} Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.533642 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f47a553-a55f-4b11-b9cc-b50e9f547c12","Type":"ContainerDied","Data":"a68721e91e489cf9c21526311d15a2a14bc4d9dd4059ff5ca4c3e687f9c3ec59"} Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.533661 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68721e91e489cf9c21526311d15a2a14bc4d9dd4059ff5ca4c3e687f9c3ec59" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.536190 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"544a72bc-27cb-4720-a8c7-a6607732ceae","Type":"ContainerDied","Data":"745feb7d5dca138c336d13c5410bad4b18d08fba7fa2113204578f818737fa28"} Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.536295 4679 scope.go:117] "RemoveContainer" containerID="35d6b74b2c491664e2025e9754b47d80af5317a30572f75ee6b917617716990a" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.536850 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.573438 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.588533 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.603724 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.614120 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:27:16 crc kubenswrapper[4679]: E0203 12:27:16.615083 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-api" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.615118 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-api" Feb 03 12:27:16 crc kubenswrapper[4679]: E0203 12:27:16.615166 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-log" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.615177 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-log" Feb 03 12:27:16 crc kubenswrapper[4679]: E0203 12:27:16.615191 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544a72bc-27cb-4720-a8c7-a6607732ceae" containerName="nova-scheduler-scheduler" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.615198 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="544a72bc-27cb-4720-a8c7-a6607732ceae" containerName="nova-scheduler-scheduler" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.615468 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="544a72bc-27cb-4720-a8c7-a6607732ceae" containerName="nova-scheduler-scheduler" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.615492 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-api" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.615512 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" containerName="nova-api-log" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.616510 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.619526 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.656158 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.712924 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sql6\" (UniqueName: \"kubernetes.io/projected/0f47a553-a55f-4b11-b9cc-b50e9f547c12-kube-api-access-4sql6\") pod \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713095 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-combined-ca-bundle\") pod \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713264 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f47a553-a55f-4b11-b9cc-b50e9f547c12-logs\") pod \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713302 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-config-data\") pod \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\" (UID: \"0f47a553-a55f-4b11-b9cc-b50e9f547c12\") " Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713593 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713631 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-config-data\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713674 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66g9\" (UniqueName: \"kubernetes.io/projected/312934bf-297f-4589-b2cb-8d2abfc3ba2f-kube-api-access-p66g9\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.713755 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f47a553-a55f-4b11-b9cc-b50e9f547c12-logs" (OuterVolumeSpecName: "logs") pod "0f47a553-a55f-4b11-b9cc-b50e9f547c12" (UID: "0f47a553-a55f-4b11-b9cc-b50e9f547c12"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.717688 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f47a553-a55f-4b11-b9cc-b50e9f547c12-kube-api-access-4sql6" (OuterVolumeSpecName: "kube-api-access-4sql6") pod "0f47a553-a55f-4b11-b9cc-b50e9f547c12" (UID: "0f47a553-a55f-4b11-b9cc-b50e9f547c12"). InnerVolumeSpecName "kube-api-access-4sql6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.744712 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f47a553-a55f-4b11-b9cc-b50e9f547c12" (UID: "0f47a553-a55f-4b11-b9cc-b50e9f547c12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.747650 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-config-data" (OuterVolumeSpecName: "config-data") pod "0f47a553-a55f-4b11-b9cc-b50e9f547c12" (UID: "0f47a553-a55f-4b11-b9cc-b50e9f547c12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815675 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815727 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-config-data\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815749 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p66g9\" (UniqueName: \"kubernetes.io/projected/312934bf-297f-4589-b2cb-8d2abfc3ba2f-kube-api-access-p66g9\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815900 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815913 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f47a553-a55f-4b11-b9cc-b50e9f547c12-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815923 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f47a553-a55f-4b11-b9cc-b50e9f547c12-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.815931 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sql6\" (UniqueName: \"kubernetes.io/projected/0f47a553-a55f-4b11-b9cc-b50e9f547c12-kube-api-access-4sql6\") on node \"crc\" DevicePath 
\"\"" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.821185 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.822250 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-config-data\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.836948 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66g9\" (UniqueName: \"kubernetes.io/projected/312934bf-297f-4589-b2cb-8d2abfc3ba2f-kube-api-access-p66g9\") pod \"nova-scheduler-0\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") " pod="openstack/nova-scheduler-0" Feb 03 12:27:16 crc kubenswrapper[4679]: I0203 12:27:16.943125 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 12:27:17 crc kubenswrapper[4679]: W0203 12:27:17.395476 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312934bf_297f_4589_b2cb_8d2abfc3ba2f.slice/crio-a0507688a51cd14bbc3df7c3a06b5cc7601d22fa7997ccb6adb825d275bd2512 WatchSource:0}: Error finding container a0507688a51cd14bbc3df7c3a06b5cc7601d22fa7997ccb6adb825d275bd2512: Status 404 returned error can't find the container with id a0507688a51cd14bbc3df7c3a06b5cc7601d22fa7997ccb6adb825d275bd2512 Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.399509 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.549131 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"312934bf-297f-4589-b2cb-8d2abfc3ba2f","Type":"ContainerStarted","Data":"a0507688a51cd14bbc3df7c3a06b5cc7601d22fa7997ccb6adb825d275bd2512"} Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.551392 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.597001 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.619116 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.633730 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.635518 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.639834 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.649145 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.732484 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f412966a-4fb9-4922-840f-99365637f9ac-logs\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.732606 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.732657 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-config-data\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.732692 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9cz4\" (UniqueName: \"kubernetes.io/projected/f412966a-4fb9-4922-840f-99365637f9ac-kube-api-access-j9cz4\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.834780 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-config-data\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.834853 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9cz4\" (UniqueName: \"kubernetes.io/projected/f412966a-4fb9-4922-840f-99365637f9ac-kube-api-access-j9cz4\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.834937 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f412966a-4fb9-4922-840f-99365637f9ac-logs\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.835016 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.836117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f412966a-4fb9-4922-840f-99365637f9ac-logs\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " 
pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.839796 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-config-data\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.843760 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.855090 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9cz4\" (UniqueName: \"kubernetes.io/projected/f412966a-4fb9-4922-840f-99365637f9ac-kube-api-access-j9cz4\") pod \"nova-api-0\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " pod="openstack/nova-api-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.910529 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.910589 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 12:27:17 crc kubenswrapper[4679]: I0203 12:27:17.971388 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:18 crc kubenswrapper[4679]: I0203 12:27:18.226688 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f47a553-a55f-4b11-b9cc-b50e9f547c12" path="/var/lib/kubelet/pods/0f47a553-a55f-4b11-b9cc-b50e9f547c12/volumes" Feb 03 12:27:18 crc kubenswrapper[4679]: I0203 12:27:18.227569 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544a72bc-27cb-4720-a8c7-a6607732ceae" path="/var/lib/kubelet/pods/544a72bc-27cb-4720-a8c7-a6607732ceae/volumes" Feb 03 12:27:18 crc kubenswrapper[4679]: I0203 12:27:18.416408 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:18 crc kubenswrapper[4679]: W0203 12:27:18.423438 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf412966a_4fb9_4922_840f_99365637f9ac.slice/crio-5a06866819f4d353cb184e5e3fc254450a3ac7bb0b2433434a781391ab0c1ce9 WatchSource:0}: Error finding container 5a06866819f4d353cb184e5e3fc254450a3ac7bb0b2433434a781391ab0c1ce9: Status 404 returned error can't find the container with id 5a06866819f4d353cb184e5e3fc254450a3ac7bb0b2433434a781391ab0c1ce9 Feb 03 12:27:18 crc kubenswrapper[4679]: I0203 12:27:18.563567 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"312934bf-297f-4589-b2cb-8d2abfc3ba2f","Type":"ContainerStarted","Data":"b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a"} Feb 03 12:27:18 crc kubenswrapper[4679]: I0203 12:27:18.565747 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f412966a-4fb9-4922-840f-99365637f9ac","Type":"ContainerStarted","Data":"5a06866819f4d353cb184e5e3fc254450a3ac7bb0b2433434a781391ab0c1ce9"} Feb 03 12:27:18 crc kubenswrapper[4679]: I0203 12:27:18.584848 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.58482616 
podStartE2EDuration="2.58482616s" podCreationTimestamp="2026-02-03 12:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:18.582305879 +0000 UTC m=+1311.057202007" watchObservedRunningTime="2026-02-03 12:27:18.58482616 +0000 UTC m=+1311.059722258" Feb 03 12:27:19 crc kubenswrapper[4679]: I0203 12:27:19.354751 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 12:27:19 crc kubenswrapper[4679]: I0203 12:27:19.579577 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f412966a-4fb9-4922-840f-99365637f9ac","Type":"ContainerStarted","Data":"4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939"} Feb 03 12:27:19 crc kubenswrapper[4679]: I0203 12:27:19.579641 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f412966a-4fb9-4922-840f-99365637f9ac","Type":"ContainerStarted","Data":"73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707"} Feb 03 12:27:19 crc kubenswrapper[4679]: I0203 12:27:19.631644 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.631620854 podStartE2EDuration="2.631620854s" podCreationTimestamp="2026-02-03 12:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:19.617501394 +0000 UTC m=+1312.092397502" watchObservedRunningTime="2026-02-03 12:27:19.631620854 +0000 UTC m=+1312.106516942" Feb 03 12:27:21 crc kubenswrapper[4679]: I0203 12:27:21.932988 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 03 12:27:21 crc kubenswrapper[4679]: I0203 12:27:21.944033 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 12:27:22 crc kubenswrapper[4679]: I0203 12:27:22.908116 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:22 crc kubenswrapper[4679]: I0203 12:27:22.908412 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cfc55122-b95a-43ed-bec8-9262c84e0fa5" containerName="kube-state-metrics" containerID="cri-o://efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec" gracePeriod=30 Feb 03 12:27:22 crc kubenswrapper[4679]: I0203 12:27:22.909783 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 12:27:22 crc kubenswrapper[4679]: I0203 12:27:22.909820 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.472437 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.585487 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7hq\" (UniqueName: \"kubernetes.io/projected/cfc55122-b95a-43ed-bec8-9262c84e0fa5-kube-api-access-xf7hq\") pod \"cfc55122-b95a-43ed-bec8-9262c84e0fa5\" (UID: \"cfc55122-b95a-43ed-bec8-9262c84e0fa5\") " Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.595624 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc55122-b95a-43ed-bec8-9262c84e0fa5-kube-api-access-xf7hq" (OuterVolumeSpecName: "kube-api-access-xf7hq") pod "cfc55122-b95a-43ed-bec8-9262c84e0fa5" (UID: "cfc55122-b95a-43ed-bec8-9262c84e0fa5"). InnerVolumeSpecName "kube-api-access-xf7hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.624529 4679 generic.go:334] "Generic (PLEG): container finished" podID="cfc55122-b95a-43ed-bec8-9262c84e0fa5" containerID="efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec" exitCode=2 Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.624569 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc55122-b95a-43ed-bec8-9262c84e0fa5","Type":"ContainerDied","Data":"efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec"} Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.624588 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.624605 4679 scope.go:117] "RemoveContainer" containerID="efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.624594 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc55122-b95a-43ed-bec8-9262c84e0fa5","Type":"ContainerDied","Data":"9dd93e633e7b3509c40665378ca93dc79837dbcac652188227169c6b723b68bc"} Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.652795 4679 scope.go:117] "RemoveContainer" containerID="efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec" Feb 03 12:27:23 crc kubenswrapper[4679]: E0203 12:27:23.653469 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec\": container with ID starting with efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec not found: ID does not exist" containerID="efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.653518 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec"} err="failed to get container status \"efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec\": rpc error: code = NotFound desc = could not find container \"efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec\": container with ID starting with efb52b0c7b80491ee0c72d3f005612224bc92ca4cca166434105ffda6d2eadec not found: ID does not exist" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.669347 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:23 crc kubenswrapper[4679]: 
I0203 12:27:23.681136 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.687795 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7hq\" (UniqueName: \"kubernetes.io/projected/cfc55122-b95a-43ed-bec8-9262c84e0fa5-kube-api-access-xf7hq\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.692594 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:23 crc kubenswrapper[4679]: E0203 12:27:23.693675 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc55122-b95a-43ed-bec8-9262c84e0fa5" containerName="kube-state-metrics" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.693796 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc55122-b95a-43ed-bec8-9262c84e0fa5" containerName="kube-state-metrics" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.694100 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc55122-b95a-43ed-bec8-9262c84e0fa5" containerName="kube-state-metrics" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.695070 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.697309 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.697748 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.726074 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.789598 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.789722 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt629\" (UniqueName: \"kubernetes.io/projected/cba05e44-77a6-4a44-84c6-8bb482680662-kube-api-access-wt629\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.789764 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.789819 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.891854 4679 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.891981 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.892063 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt629\" (UniqueName: \"kubernetes.io/projected/cba05e44-77a6-4a44-84c6-8bb482680662-kube-api-access-wt629\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.892104 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.897056 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.903053 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.910546 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.910673 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba05e44-77a6-4a44-84c6-8bb482680662-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.910943 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 12:27:23 crc kubenswrapper[4679]: I0203 12:27:23.924835 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wt629\" (UniqueName: \"kubernetes.io/projected/cba05e44-77a6-4a44-84c6-8bb482680662-kube-api-access-wt629\") pod \"kube-state-metrics-0\" (UID: \"cba05e44-77a6-4a44-84c6-8bb482680662\") " pod="openstack/kube-state-metrics-0" Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.014863 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.225563 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc55122-b95a-43ed-bec8-9262c84e0fa5" path="/var/lib/kubelet/pods/cfc55122-b95a-43ed-bec8-9262c84e0fa5/volumes" Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.545206 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.644833 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cba05e44-77a6-4a44-84c6-8bb482680662","Type":"ContainerStarted","Data":"a30dc7612994296b9a7b62a10c893dabb0ef62a433efe6af1bccb9ef203982c5"} Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.719717 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.720262 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-central-agent" containerID="cri-o://7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4" gracePeriod=30 Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.720393 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="proxy-httpd" containerID="cri-o://b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557" gracePeriod=30 Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.720436 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="sg-core" containerID="cri-o://0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee" gracePeriod=30 Feb 03 12:27:24 crc kubenswrapper[4679]: I0203 12:27:24.720467 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-notification-agent" containerID="cri-o://3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b" gracePeriod=30 Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.667231 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerID="b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557" exitCode=0 Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.667596 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerID="0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee" exitCode=2 Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.667606 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerID="7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4" exitCode=0 Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.667293 4679 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerDied","Data":"b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557"} Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.667682 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerDied","Data":"0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee"} Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.667710 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerDied","Data":"7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4"} Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.670656 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cba05e44-77a6-4a44-84c6-8bb482680662","Type":"ContainerStarted","Data":"22266811f1103ed8bef8058d010ea34c4df36986cc9d4ffdcc70a2e7f051c891"} Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.670994 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 12:27:25 crc kubenswrapper[4679]: I0203 12:27:25.696647 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.332843202 podStartE2EDuration="2.696614098s" podCreationTimestamp="2026-02-03 12:27:23 +0000 UTC" firstStartedPulling="2026-02-03 12:27:24.556555675 +0000 UTC m=+1317.031451773" lastFinishedPulling="2026-02-03 12:27:24.920326571 +0000 UTC m=+1317.395222669" observedRunningTime="2026-02-03 12:27:25.691797612 +0000 UTC m=+1318.166693740" watchObservedRunningTime="2026-02-03 12:27:25.696614098 +0000 UTC m=+1318.171510256" Feb 03 12:27:26 crc kubenswrapper[4679]: I0203 12:27:26.944988 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 12:27:26 crc kubenswrapper[4679]: I0203 12:27:26.981139 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 12:27:27 crc kubenswrapper[4679]: I0203 12:27:27.731842 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 12:27:27 crc kubenswrapper[4679]: I0203 12:27:27.972925 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:27:27 crc kubenswrapper[4679]: I0203 12:27:27.972987 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.015645 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.063669 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.297196 4679 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432297 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-config-data\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432422 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-run-httpd\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432483 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-combined-ca-bundle\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432587 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-scripts\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432686 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-sg-core-conf-yaml\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432707 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-log-httpd\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.432748 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds4lx\" (UniqueName: \"kubernetes.io/projected/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-kube-api-access-ds4lx\") pod \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\" (UID: \"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab\") " Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.435120 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.435552 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.442901 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-kube-api-access-ds4lx" (OuterVolumeSpecName: "kube-api-access-ds4lx") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "kube-api-access-ds4lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.445906 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-scripts" (OuterVolumeSpecName: "scripts") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.478961 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.517018 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.535399 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.535443 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.535455 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds4lx\" (UniqueName: \"kubernetes.io/projected/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-kube-api-access-ds4lx\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.535470 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.535480 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.535491 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.540550 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-config-data" (OuterVolumeSpecName: "config-data") pod "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" (UID: "f4814bb4-4cf1-4e8e-8e9e-11e95b750bab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.637136 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.716588 4679 generic.go:334] "Generic (PLEG): container finished" podID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerID="3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b" exitCode=0 Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.716647 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerDied","Data":"3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b"} Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.716669 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.716681 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4814bb4-4cf1-4e8e-8e9e-11e95b750bab","Type":"ContainerDied","Data":"360ae2b5b45b7b2e8ceb0f895dfcaca925f058df8f2b37c261c4421ee1227961"} Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.716703 4679 scope.go:117] "RemoveContainer" containerID="b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.739171 4679 scope.go:117] "RemoveContainer" containerID="0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.772261 4679 scope.go:117] "RemoveContainer" containerID="3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.772549 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.785755 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.798853 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.799302 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-notification-agent" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799326 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-notification-agent" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.799337 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="proxy-httpd" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799343 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="proxy-httpd" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.799403 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" 
containerName="ceilometer-central-agent" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799411 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-central-agent" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.799423 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="sg-core" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799430 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="sg-core" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799603 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-central-agent" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799616 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="sg-core" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799631 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="proxy-httpd" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.799647 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" containerName="ceilometer-notification-agent" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.802230 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.806376 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.807932 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.808065 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.809922 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.820447 4679 scope.go:117] "RemoveContainer" containerID="7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.843477 4679 scope.go:117] "RemoveContainer" containerID="b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.843957 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557\": container with ID starting with b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557 not found: ID does not exist" containerID="b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.844010 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557"} err="failed to get container status \"b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557\": rpc error: code = NotFound desc = could not find container \"b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557\": 
container with ID starting with b0e73d3cd04e34a6b0287ad16cbeea658178240323287770bbc06dfb12ffa557 not found: ID does not exist" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.844034 4679 scope.go:117] "RemoveContainer" containerID="0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.844432 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee\": container with ID starting with 0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee not found: ID does not exist" containerID="0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.844460 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee"} err="failed to get container status \"0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee\": rpc error: code = NotFound desc = could not find container \"0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee\": container with ID starting with 0d2d4d1930c5cb88ca6029a0a7e43974aa57aeabf23ff5cc08e97a8168a932ee not found: ID does not exist" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.844476 4679 scope.go:117] "RemoveContainer" containerID="3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.844773 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b\": container with ID starting with 3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b not found: ID does not exist" containerID="3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.844800 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b"} err="failed to get container status \"3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b\": rpc error: code = NotFound desc = could not find container \"3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b\": container with ID starting with 3a6c5af74f7ecaeb1e0bb3bc8574af03e17d2ee0884c710424bfb243dea5724b not found: ID does not exist" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.844814 4679 scope.go:117] "RemoveContainer" containerID="7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4" Feb 03 12:27:29 crc kubenswrapper[4679]: E0203 12:27:29.845131 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4\": container with ID starting with 7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4 not found: ID does not exist" containerID="7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.845151 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4"} err="failed to get container status 
\"7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4\": rpc error: code = NotFound desc = could not find container \"7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4\": container with ID starting with 7dabd7f89097c9210e357ffe65da6b702f3e184e27a2eba91961844735e95ba4 not found: ID does not exist" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945447 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945527 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945583 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-run-httpd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945636 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-log-httpd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945670 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4tcd\" (UniqueName: \"kubernetes.io/projected/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-kube-api-access-j4tcd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945704 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-config-data\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945725 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-scripts\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:29 crc kubenswrapper[4679]: I0203 12:27:29.945741 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047071 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047160 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047217 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-run-httpd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047275 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-log-httpd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047304 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4tcd\" (UniqueName: \"kubernetes.io/projected/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-kube-api-access-j4tcd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047336 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-config-data\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047368 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-scripts\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.047386 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.048579 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-log-httpd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.048837 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-run-httpd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.054338 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.054677 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-config-data\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.054775 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.054969 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-scripts\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.063577 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.066624 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4tcd\" (UniqueName: \"kubernetes.io/projected/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-kube-api-access-j4tcd\") pod \"ceilometer-0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.130261 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.233481 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4814bb4-4cf1-4e8e-8e9e-11e95b750bab" path="/var/lib/kubelet/pods/f4814bb4-4cf1-4e8e-8e9e-11e95b750bab/volumes" Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.621904 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:30 crc kubenswrapper[4679]: I0203 12:27:30.727806 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerStarted","Data":"3643b536a45031432f0518147c01360b3e1066e03d8a53f3f6844603dfbd19e7"} Feb 03 12:27:31 crc kubenswrapper[4679]: I0203 12:27:31.740525 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerStarted","Data":"708f09245f6e37ee760fb3cb3418d490c8186264e7d02b72c80d2021fa1ce0a6"} Feb 03 12:27:32 crc kubenswrapper[4679]: I0203 12:27:32.761828 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerStarted","Data":"f15cf61d1141c6d77e655a543af6792cc18e4e830cb276a20b4168511275a227"} Feb 03 12:27:32 crc kubenswrapper[4679]: I0203 12:27:32.915851 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 12:27:32 crc kubenswrapper[4679]: I0203 12:27:32.919695 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 12:27:32 crc kubenswrapper[4679]: I0203 12:27:32.923740 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 12:27:33 crc kubenswrapper[4679]: I0203 12:27:33.776028 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerStarted","Data":"76e9a88e9471941cbdd440bf10c841f2bba62120f4cb9a1ed65862c2f5334378"} Feb 03 12:27:33 crc kubenswrapper[4679]: I0203 12:27:33.782688 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.035265 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.737748 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.784060 4679 generic.go:334] "Generic (PLEG): container finished" podID="c51123a8-f43c-413f-9752-215f4ae1a2b2" containerID="16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5" exitCode=137 Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.784109 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.784100 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c51123a8-f43c-413f-9752-215f4ae1a2b2","Type":"ContainerDied","Data":"16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5"} Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.784162 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c51123a8-f43c-413f-9752-215f4ae1a2b2","Type":"ContainerDied","Data":"595c1cd1f8c36d07e43227e1abb455c94318678a1fc1759dbda590c22aa66343"} Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.784184 4679 scope.go:117] "RemoveContainer" containerID="16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.807599 4679 scope.go:117] "RemoveContainer" containerID="16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5" Feb 03 12:27:34 crc kubenswrapper[4679]: E0203 12:27:34.808168 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5\": container with ID starting with 16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5 not found: ID does not exist" containerID="16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.808240 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5"} err="failed to get container status \"16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5\": rpc error: code = NotFound desc = could not find container \"16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5\": container with ID starting with 16cf0dd9119537917c0cc1091c1105e2c3530b1dd85d7b18a1691c8a9bfdcfa5 not found: ID does not exist" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.851034 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-combined-ca-bundle\") pod \"c51123a8-f43c-413f-9752-215f4ae1a2b2\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.851141 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-config-data\") pod \"c51123a8-f43c-413f-9752-215f4ae1a2b2\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.851177 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwc6j\" (UniqueName: \"kubernetes.io/projected/c51123a8-f43c-413f-9752-215f4ae1a2b2-kube-api-access-zwc6j\") pod \"c51123a8-f43c-413f-9752-215f4ae1a2b2\" (UID: \"c51123a8-f43c-413f-9752-215f4ae1a2b2\") " Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.856982 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51123a8-f43c-413f-9752-215f4ae1a2b2-kube-api-access-zwc6j" (OuterVolumeSpecName: "kube-api-access-zwc6j") pod "c51123a8-f43c-413f-9752-215f4ae1a2b2" (UID: "c51123a8-f43c-413f-9752-215f4ae1a2b2"). 
InnerVolumeSpecName "kube-api-access-zwc6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.889160 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-config-data" (OuterVolumeSpecName: "config-data") pod "c51123a8-f43c-413f-9752-215f4ae1a2b2" (UID: "c51123a8-f43c-413f-9752-215f4ae1a2b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.891960 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c51123a8-f43c-413f-9752-215f4ae1a2b2" (UID: "c51123a8-f43c-413f-9752-215f4ae1a2b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.953398 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.953439 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51123a8-f43c-413f-9752-215f4ae1a2b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:34 crc kubenswrapper[4679]: I0203 12:27:34.953452 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwc6j\" (UniqueName: \"kubernetes.io/projected/c51123a8-f43c-413f-9752-215f4ae1a2b2-kube-api-access-zwc6j\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.118868 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.133290 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.143033 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:27:35 crc kubenswrapper[4679]: E0203 12:27:35.143571 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51123a8-f43c-413f-9752-215f4ae1a2b2" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.143593 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51123a8-f43c-413f-9752-215f4ae1a2b2" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.143841 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51123a8-f43c-413f-9752-215f4ae1a2b2" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.144608 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.146751 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.147284 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.147538 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.154571 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.257609 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vnb\" (UniqueName: \"kubernetes.io/projected/bafb8aaf-7819-4978-aaae-7d26a4a126b6-kube-api-access-t4vnb\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.257716 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.257739 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.257788 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.257835 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.359149 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.359200 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 
12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.359290 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.359352 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.359421 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vnb\" (UniqueName: \"kubernetes.io/projected/bafb8aaf-7819-4978-aaae-7d26a4a126b6-kube-api-access-t4vnb\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.365838 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.366407 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.366444 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.378600 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafb8aaf-7819-4978-aaae-7d26a4a126b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.382847 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vnb\" (UniqueName: \"kubernetes.io/projected/bafb8aaf-7819-4978-aaae-7d26a4a126b6-kube-api-access-t4vnb\") pod \"nova-cell1-novncproxy-0\" (UID: \"bafb8aaf-7819-4978-aaae-7d26a4a126b6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.473599 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.800005 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerStarted","Data":"c21e41a191c7e1e33cce604e8271a3c0becd51da1a13e7860ca7a9fe4938f077"} Feb 03 12:27:35 crc kubenswrapper[4679]: I0203 12:27:35.833855 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.027747475 podStartE2EDuration="6.833808171s" podCreationTimestamp="2026-02-03 12:27:29 +0000 UTC" firstStartedPulling="2026-02-03 12:27:30.625333418 +0000 UTC m=+1323.100229516" lastFinishedPulling="2026-02-03 12:27:35.431394114 +0000 UTC m=+1327.906290212" observedRunningTime="2026-02-03 12:27:35.822329745 +0000 UTC m=+1328.297225833" watchObservedRunningTime="2026-02-03 12:27:35.833808171 +0000 UTC m=+1328.308704259" Feb 03 12:27:36 crc kubenswrapper[4679]: W0203 12:27:36.001850 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbafb8aaf_7819_4978_aaae_7d26a4a126b6.slice/crio-2c35097a7126a8d1649554dced0da47f70a2416c91d87d45e192ce845edddaf0 WatchSource:0}: Error finding container 2c35097a7126a8d1649554dced0da47f70a2416c91d87d45e192ce845edddaf0: Status 404 returned error can't find the container with id 2c35097a7126a8d1649554dced0da47f70a2416c91d87d45e192ce845edddaf0 Feb 03 12:27:36 crc kubenswrapper[4679]: I0203 12:27:36.008130 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 12:27:36 crc kubenswrapper[4679]: I0203 12:27:36.225140 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51123a8-f43c-413f-9752-215f4ae1a2b2" path="/var/lib/kubelet/pods/c51123a8-f43c-413f-9752-215f4ae1a2b2/volumes" Feb 03 12:27:36 crc kubenswrapper[4679]: I0203 12:27:36.813340 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bafb8aaf-7819-4978-aaae-7d26a4a126b6","Type":"ContainerStarted","Data":"a16c7fd658597e9fb32573f77972410ac1820d412c0c3f40606798a4477b4c40"} Feb 03 12:27:36 crc kubenswrapper[4679]: I0203 12:27:36.813741 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bafb8aaf-7819-4978-aaae-7d26a4a126b6","Type":"ContainerStarted","Data":"2c35097a7126a8d1649554dced0da47f70a2416c91d87d45e192ce845edddaf0"} Feb 03 12:27:36 crc kubenswrapper[4679]: I0203 12:27:36.813764 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:27:36 crc kubenswrapper[4679]: I0203 12:27:36.861812 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.861794333 podStartE2EDuration="1.861794333s" podCreationTimestamp="2026-02-03 12:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:36.856899095 +0000 UTC m=+1329.331795193" watchObservedRunningTime="2026-02-03 12:27:36.861794333 +0000 UTC m=+1329.336690421" Feb 03 12:27:37 crc kubenswrapper[4679]: I0203 12:27:37.975350 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 12:27:37 crc kubenswrapper[4679]: I0203 12:27:37.976212 4679 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 12:27:37 crc kubenswrapper[4679]: I0203 12:27:37.980192 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 12:27:37 crc kubenswrapper[4679]: I0203 12:27:37.981057 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 12:27:38 crc kubenswrapper[4679]: I0203 12:27:38.832649 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 12:27:38 crc kubenswrapper[4679]: I0203 12:27:38.839444 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.028383 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zvwqj"] Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.029953 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.038774 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zvwqj"] Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.154352 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-config\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.154443 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.154521 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.154547 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65p7b\" (UniqueName: \"kubernetes.io/projected/0d9284bf-bc48-4115-af0d-c5a4db772cb4-kube-api-access-65p7b\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.154597 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.154616 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: 
\"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.256712 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.256769 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.256827 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-config\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.256859 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.256931 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.256959 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65p7b\" (UniqueName: \"kubernetes.io/projected/0d9284bf-bc48-4115-af0d-c5a4db772cb4-kube-api-access-65p7b\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.257848 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.258036 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.258229 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-config\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: 
I0203 12:27:39.258493 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.258888 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.277944 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65p7b\" (UniqueName: \"kubernetes.io/projected/0d9284bf-bc48-4115-af0d-c5a4db772cb4-kube-api-access-65p7b\") pod \"dnsmasq-dns-89c5cd4d5-zvwqj\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.375945 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:39 crc kubenswrapper[4679]: I0203 12:27:39.921154 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zvwqj"] Feb 03 12:27:39 crc kubenswrapper[4679]: W0203 12:27:39.924413 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9284bf_bc48_4115_af0d_c5a4db772cb4.slice/crio-d6ce5e5744e10b64c0814522e458460b160a5822f76ebd1909d55d4fd43b88cf WatchSource:0}: Error finding container d6ce5e5744e10b64c0814522e458460b160a5822f76ebd1909d55d4fd43b88cf: Status 404 returned error can't find the container with id d6ce5e5744e10b64c0814522e458460b160a5822f76ebd1909d55d4fd43b88cf Feb 03 12:27:40 crc kubenswrapper[4679]: I0203 12:27:40.475485 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:40 crc kubenswrapper[4679]: I0203 12:27:40.852232 4679 generic.go:334] "Generic (PLEG): container finished" podID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerID="5e4d901c8231372e73fc8279e4bd64753495730e28a8aa7bfe6c582ceeeacc86" exitCode=0 Feb 03 12:27:40 crc kubenswrapper[4679]: I0203 12:27:40.852330 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" event={"ID":"0d9284bf-bc48-4115-af0d-c5a4db772cb4","Type":"ContainerDied","Data":"5e4d901c8231372e73fc8279e4bd64753495730e28a8aa7bfe6c582ceeeacc86"} Feb 03 12:27:40 crc kubenswrapper[4679]: I0203 12:27:40.852414 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" event={"ID":"0d9284bf-bc48-4115-af0d-c5a4db772cb4","Type":"ContainerStarted","Data":"d6ce5e5744e10b64c0814522e458460b160a5822f76ebd1909d55d4fd43b88cf"} Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.372809 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.373474 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-central-agent" containerID="cri-o://708f09245f6e37ee760fb3cb3418d490c8186264e7d02b72c80d2021fa1ce0a6" 
gracePeriod=30 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.373570 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="proxy-httpd" containerID="cri-o://c21e41a191c7e1e33cce604e8271a3c0becd51da1a13e7860ca7a9fe4938f077" gracePeriod=30 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.373614 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-notification-agent" containerID="cri-o://f15cf61d1141c6d77e655a543af6792cc18e4e830cb276a20b4168511275a227" gracePeriod=30 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.373589 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="sg-core" containerID="cri-o://76e9a88e9471941cbdd440bf10c841f2bba62120f4cb9a1ed65862c2f5334378" gracePeriod=30 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.493091 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.867236 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" event={"ID":"0d9284bf-bc48-4115-af0d-c5a4db772cb4","Type":"ContainerStarted","Data":"30e2a26710b5a1fd73bcd8655d3892ed0d02e49dda3974067958ba91b005f017"} Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.867756 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899184 4679 generic.go:334] "Generic (PLEG): container finished" podID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerID="c21e41a191c7e1e33cce604e8271a3c0becd51da1a13e7860ca7a9fe4938f077" exitCode=0 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899222 4679 generic.go:334] "Generic (PLEG): container finished" podID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerID="76e9a88e9471941cbdd440bf10c841f2bba62120f4cb9a1ed65862c2f5334378" exitCode=2 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899232 4679 generic.go:334] "Generic (PLEG): container finished" podID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerID="f15cf61d1141c6d77e655a543af6792cc18e4e830cb276a20b4168511275a227" exitCode=0 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899242 4679 generic.go:334] "Generic (PLEG): container finished" podID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerID="708f09245f6e37ee760fb3cb3418d490c8186264e7d02b72c80d2021fa1ce0a6" exitCode=0 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899266 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerDied","Data":"c21e41a191c7e1e33cce604e8271a3c0becd51da1a13e7860ca7a9fe4938f077"} Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899332 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerDied","Data":"76e9a88e9471941cbdd440bf10c841f2bba62120f4cb9a1ed65862c2f5334378"} Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899346 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerDied","Data":"f15cf61d1141c6d77e655a543af6792cc18e4e830cb276a20b4168511275a227"} Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899371 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerDied","Data":"708f09245f6e37ee760fb3cb3418d490c8186264e7d02b72c80d2021fa1ce0a6"} Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899469 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-log" containerID="cri-o://73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707" gracePeriod=30 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.899947 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-api" containerID="cri-o://4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939" gracePeriod=30 Feb 03 12:27:41 crc kubenswrapper[4679]: I0203 12:27:41.906860 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" podStartSLOduration=3.906841388 podStartE2EDuration="3.906841388s" podCreationTimestamp="2026-02-03 12:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:41.895808252 +0000 UTC m=+1334.370704360" watchObservedRunningTime="2026-02-03 12:27:41.906841388 +0000 UTC m=+1334.381737476" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.162936 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224050 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-run-httpd\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224125 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-scripts\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224221 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-config-data\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224269 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-combined-ca-bundle\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224346 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-sg-core-conf-yaml\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224418 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-log-httpd\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224504 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-ceilometer-tls-certs\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224531 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4tcd\" (UniqueName: \"kubernetes.io/projected/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-kube-api-access-j4tcd\") pod \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\" (UID: \"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0\") " Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.224632 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.225086 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.225652 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.230477 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-kube-api-access-j4tcd" (OuterVolumeSpecName: "kube-api-access-j4tcd") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "kube-api-access-j4tcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.235464 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-scripts" (OuterVolumeSpecName: "scripts") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.276333 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.295560 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.324711 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.327784 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.328656 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.328672 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.328680 4679 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.328691 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4tcd\" (UniqueName: \"kubernetes.io/projected/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-kube-api-access-j4tcd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.328783 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.355938 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-config-data" (OuterVolumeSpecName: "config-data") pod "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" (UID: "e1fd37ff-b6c7-4448-95bc-bb17b925ecf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.430218 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.914147 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1fd37ff-b6c7-4448-95bc-bb17b925ecf0","Type":"ContainerDied","Data":"3643b536a45031432f0518147c01360b3e1066e03d8a53f3f6844603dfbd19e7"} Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.915021 4679 scope.go:117] "RemoveContainer" containerID="c21e41a191c7e1e33cce604e8271a3c0becd51da1a13e7860ca7a9fe4938f077" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.914397 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.919581 4679 generic.go:334] "Generic (PLEG): container finished" podID="f412966a-4fb9-4922-840f-99365637f9ac" containerID="73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707" exitCode=143 Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.919648 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f412966a-4fb9-4922-840f-99365637f9ac","Type":"ContainerDied","Data":"73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707"} Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.942079 4679 scope.go:117] "RemoveContainer" containerID="76e9a88e9471941cbdd440bf10c841f2bba62120f4cb9a1ed65862c2f5334378" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.967192 4679 scope.go:117] "RemoveContainer" containerID="f15cf61d1141c6d77e655a543af6792cc18e4e830cb276a20b4168511275a227" Feb 03 12:27:42 crc kubenswrapper[4679]: I0203 12:27:42.985207 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.042195 4679 scope.go:117] "RemoveContainer" containerID="708f09245f6e37ee760fb3cb3418d490c8186264e7d02b72c80d2021fa1ce0a6" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.044171 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.056908 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:43 crc kubenswrapper[4679]: E0203 12:27:43.057495 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-notification-agent" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057521 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-notification-agent" Feb 03 12:27:43 crc kubenswrapper[4679]: E0203 12:27:43.057549 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="sg-core" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057559 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="sg-core" Feb 03 12:27:43 crc kubenswrapper[4679]: E0203 12:27:43.057577 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-central-agent" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057586 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-central-agent" Feb 03 12:27:43 crc kubenswrapper[4679]: E0203 12:27:43.057606 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="proxy-httpd" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057614 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="proxy-httpd" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057876 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-notification-agent" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057917 4679 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="sg-core" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057935 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="proxy-httpd" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.057958 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" containerName="ceilometer-central-agent" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.060247 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.064834 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.065238 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.065422 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.069249 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147062 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147502 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-config-data\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147524 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147544 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147579 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc25t\" (UniqueName: \"kubernetes.io/projected/4fa391bd-d9a6-4011-adaf-1a9596f14605-kube-api-access-rc25t\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147657 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" 
Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147842 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-scripts\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.147884 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249571 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-scripts\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249665 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249704 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249811 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-config-data\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249836 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249863 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249909 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc25t\" (UniqueName: \"kubernetes.io/projected/4fa391bd-d9a6-4011-adaf-1a9596f14605-kube-api-access-rc25t\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.249935 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " 
pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.250876 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.251105 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.258834 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.258927 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.259297 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.261901 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-config-data\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.263260 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-scripts\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.271442 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc25t\" (UniqueName: \"kubernetes.io/projected/4fa391bd-d9a6-4011-adaf-1a9596f14605-kube-api-access-rc25t\") pod \"ceilometer-0\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.394801 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.415010 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.849459 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:43 crc kubenswrapper[4679]: W0203 12:27:43.851786 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa391bd_d9a6_4011_adaf_1a9596f14605.slice/crio-31db735cba3e7c8129338320062c65ab52a1974b2ba3f6b9f8b7729f00bbce4c WatchSource:0}: Error finding container 31db735cba3e7c8129338320062c65ab52a1974b2ba3f6b9f8b7729f00bbce4c: Status 404 returned error can't find the container with id 31db735cba3e7c8129338320062c65ab52a1974b2ba3f6b9f8b7729f00bbce4c Feb 03 12:27:43 crc kubenswrapper[4679]: I0203 12:27:43.940054 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerStarted","Data":"31db735cba3e7c8129338320062c65ab52a1974b2ba3f6b9f8b7729f00bbce4c"} Feb 03 12:27:44 crc kubenswrapper[4679]: I0203 12:27:44.223197 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fd37ff-b6c7-4448-95bc-bb17b925ecf0" path="/var/lib/kubelet/pods/e1fd37ff-b6c7-4448-95bc-bb17b925ecf0/volumes" Feb 03 12:27:44 crc kubenswrapper[4679]: I0203 12:27:44.951558 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerStarted","Data":"42de09521e2201cbf9d16b1a49ad25963236c2d77264be13b5edf6c22c2fbc03"} Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.475377 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.497162 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.498469 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.600591 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-config-data\") pod \"f412966a-4fb9-4922-840f-99365637f9ac\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.600697 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-combined-ca-bundle\") pod \"f412966a-4fb9-4922-840f-99365637f9ac\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.600734 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9cz4\" (UniqueName: \"kubernetes.io/projected/f412966a-4fb9-4922-840f-99365637f9ac-kube-api-access-j9cz4\") pod \"f412966a-4fb9-4922-840f-99365637f9ac\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.600752 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f412966a-4fb9-4922-840f-99365637f9ac-logs\") pod \"f412966a-4fb9-4922-840f-99365637f9ac\" (UID: \"f412966a-4fb9-4922-840f-99365637f9ac\") " Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.601644 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f412966a-4fb9-4922-840f-99365637f9ac-logs" (OuterVolumeSpecName: "logs") pod "f412966a-4fb9-4922-840f-99365637f9ac" (UID: "f412966a-4fb9-4922-840f-99365637f9ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.615455 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f412966a-4fb9-4922-840f-99365637f9ac-kube-api-access-j9cz4" (OuterVolumeSpecName: "kube-api-access-j9cz4") pod "f412966a-4fb9-4922-840f-99365637f9ac" (UID: "f412966a-4fb9-4922-840f-99365637f9ac"). InnerVolumeSpecName "kube-api-access-j9cz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.633161 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f412966a-4fb9-4922-840f-99365637f9ac" (UID: "f412966a-4fb9-4922-840f-99365637f9ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.644644 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-config-data" (OuterVolumeSpecName: "config-data") pod "f412966a-4fb9-4922-840f-99365637f9ac" (UID: "f412966a-4fb9-4922-840f-99365637f9ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.713012 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.713061 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f412966a-4fb9-4922-840f-99365637f9ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.713071 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9cz4\" (UniqueName: \"kubernetes.io/projected/f412966a-4fb9-4922-840f-99365637f9ac-kube-api-access-j9cz4\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.713084 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f412966a-4fb9-4922-840f-99365637f9ac-logs\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.961933 4679 generic.go:334] "Generic (PLEG): container finished" podID="f412966a-4fb9-4922-840f-99365637f9ac" containerID="4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939" exitCode=0 Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.962005 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f412966a-4fb9-4922-840f-99365637f9ac","Type":"ContainerDied","Data":"4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939"} Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.962033 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f412966a-4fb9-4922-840f-99365637f9ac","Type":"ContainerDied","Data":"5a06866819f4d353cb184e5e3fc254450a3ac7bb0b2433434a781391ab0c1ce9"} Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.962031 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.962050 4679 scope.go:117] "RemoveContainer" containerID="4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939" Feb 03 12:27:45 crc kubenswrapper[4679]: I0203 12:27:45.969262 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerStarted","Data":"bd655d9835e55fb5c8dc389c02144757a11df78aefc7954e9f383b0b65780705"} Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.005548 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.077919 4679 scope.go:117] "RemoveContainer" containerID="73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.112197 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.121348 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.140703 4679 scope.go:117] "RemoveContainer" containerID="4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939" Feb 03 12:27:46 crc kubenswrapper[4679]: E0203 12:27:46.141711 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939\": container with ID starting with 4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939 not found: ID does not exist" containerID="4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.141780 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939"} err="failed to get container status \"4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939\": rpc error: code = NotFound desc = could not find container \"4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939\": container with ID starting with 4fb18db33e9cd3f8311c40df3d8eb4bedf389e22e29b2cf8f4b2a6456545b939 not found: ID does not exist" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.141821 4679 scope.go:117] "RemoveContainer" containerID="73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707" Feb 03 12:27:46 crc kubenswrapper[4679]: E0203 12:27:46.144256 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707\": container with ID starting with 73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707 not found: ID does not exist" containerID="73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.144304 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707"} err="failed to get container status \"73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707\": rpc error: code = NotFound desc = could not find container \"73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707\": container with ID starting 
with 73c2f8c896c31a5148b8cf703980b4c8166ab9f7a950dcaafa0166e8068ae707 not found: ID does not exist" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.147891 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:46 crc kubenswrapper[4679]: E0203 12:27:46.148538 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-log" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.148565 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-log" Feb 03 12:27:46 crc kubenswrapper[4679]: E0203 12:27:46.148619 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-api" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.148633 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-api" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.148908 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-log" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.148946 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f412966a-4fb9-4922-840f-99365637f9ac" containerName="nova-api-api" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.150249 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.155828 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.156157 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.156179 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.169680 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.235954 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f412966a-4fb9-4922-840f-99365637f9ac" path="/var/lib/kubelet/pods/f412966a-4fb9-4922-840f-99365637f9ac/volumes" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.279279 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-722gz"] Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.281135 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.283692 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.283692 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.289441 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-722gz"] Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.324150 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxfm\" (UniqueName: \"kubernetes.io/projected/4602bf28-877f-4acc-9137-f0ef0e2709f7-kube-api-access-qxxfm\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.324273 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.324340 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.324417 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-config-data\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.324462 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4602bf28-877f-4acc-9137-f0ef0e2709f7-logs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.324486 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.426705 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-config-data\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.426767 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-scripts\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 
12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.426854 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4602bf28-877f-4acc-9137-f0ef0e2709f7-logs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.426882 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.426960 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxfm\" (UniqueName: \"kubernetes.io/projected/4602bf28-877f-4acc-9137-f0ef0e2709f7-kube-api-access-qxxfm\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.427034 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.427079 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.427125 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-config-data\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.427183 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.427222 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh2x8\" (UniqueName: \"kubernetes.io/projected/313bf325-bcb3-47af-9916-3e441aa0754a-kube-api-access-wh2x8\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.427384 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4602bf28-877f-4acc-9137-f0ef0e2709f7-logs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.432863 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.433606 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.434830 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.434928 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-config-data\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.446193 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxfm\" (UniqueName: \"kubernetes.io/projected/4602bf28-877f-4acc-9137-f0ef0e2709f7-kube-api-access-qxxfm\") pod \"nova-api-0\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") " pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.481647 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.528942 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-scripts\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.529064 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.529104 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-config-data\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.529157 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh2x8\" (UniqueName: \"kubernetes.io/projected/313bf325-bcb3-47af-9916-3e441aa0754a-kube-api-access-wh2x8\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.538437 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.544134 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-config-data\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.544919 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-scripts\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.549082 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh2x8\" (UniqueName: \"kubernetes.io/projected/313bf325-bcb3-47af-9916-3e441aa0754a-kube-api-access-wh2x8\") pod \"nova-cell1-cell-mapping-722gz\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") " pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.600186 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-722gz" Feb 03 12:27:46 crc kubenswrapper[4679]: I0203 12:27:46.982662 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 12:27:47 crc kubenswrapper[4679]: I0203 12:27:47.003734 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerStarted","Data":"fe3fc7470e1a98184b7765bfcfa9c2cc0f31f06ddda4d8711626fee2017165a0"} Feb 03 12:27:47 crc kubenswrapper[4679]: W0203 12:27:47.029721 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4602bf28_877f_4acc_9137_f0ef0e2709f7.slice/crio-aaab69a485ec13dbed66dd60ba4e49333ad821fe7a276d5e0f628fd455983e04 WatchSource:0}: Error finding container aaab69a485ec13dbed66dd60ba4e49333ad821fe7a276d5e0f628fd455983e04: Status 404 returned error can't find the container with id aaab69a485ec13dbed66dd60ba4e49333ad821fe7a276d5e0f628fd455983e04 Feb 03 12:27:47 crc kubenswrapper[4679]: I0203 12:27:47.167815 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-722gz"] Feb 03 12:27:47 crc kubenswrapper[4679]: W0203 12:27:47.173582 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod313bf325_bcb3_47af_9916_3e441aa0754a.slice/crio-2190e5416661d0346d66e0abbe4f5feff6a4cac7b2cb98915d29787306e243d8 WatchSource:0}: Error finding container 2190e5416661d0346d66e0abbe4f5feff6a4cac7b2cb98915d29787306e243d8: Status 404 returned error can't find the container with id 2190e5416661d0346d66e0abbe4f5feff6a4cac7b2cb98915d29787306e243d8 Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.020549 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-722gz" event={"ID":"313bf325-bcb3-47af-9916-3e441aa0754a","Type":"ContainerStarted","Data":"af4e745a5c5fe24439f5af568f671094ec925c4987c5c373c76e844c0e8c5bb8"} Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.020882 4679 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-722gz" event={"ID":"313bf325-bcb3-47af-9916-3e441aa0754a","Type":"ContainerStarted","Data":"2190e5416661d0346d66e0abbe4f5feff6a4cac7b2cb98915d29787306e243d8"} Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.025383 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4602bf28-877f-4acc-9137-f0ef0e2709f7","Type":"ContainerStarted","Data":"4f714a01ca0dbeebb3257a08b627fa98005255060ca5022bdaddc394e0a26f36"} Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.025747 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4602bf28-877f-4acc-9137-f0ef0e2709f7","Type":"ContainerStarted","Data":"a672d1781e60b38b6732f4b065c5667432b67bc88ed2578f057fdff31e436564"} Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.025765 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4602bf28-877f-4acc-9137-f0ef0e2709f7","Type":"ContainerStarted","Data":"aaab69a485ec13dbed66dd60ba4e49333ad821fe7a276d5e0f628fd455983e04"} Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.048109 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-722gz" podStartSLOduration=2.048092298 podStartE2EDuration="2.048092298s" podCreationTimestamp="2026-02-03 12:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:48.037249956 +0000 UTC m=+1340.512146054" watchObservedRunningTime="2026-02-03 12:27:48.048092298 +0000 UTC m=+1340.522988386" Feb 03 12:27:48 crc kubenswrapper[4679]: I0203 12:27:48.071263 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.071239646 podStartE2EDuration="2.071239646s" podCreationTimestamp="2026-02-03 12:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:48.066301276 +0000 UTC m=+1340.541197374" watchObservedRunningTime="2026-02-03 12:27:48.071239646 +0000 UTC m=+1340.546135734" Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.037144 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerStarted","Data":"6a77ff6260ecd9ed74cf9e16f8862856ca0ec94b4b2b15752152da588fa30539"} Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.038632 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.038266 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="proxy-httpd" containerID="cri-o://6a77ff6260ecd9ed74cf9e16f8862856ca0ec94b4b2b15752152da588fa30539" gracePeriod=30 Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.038374 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-notification-agent" containerID="cri-o://bd655d9835e55fb5c8dc389c02144757a11df78aefc7954e9f383b0b65780705" gracePeriod=30 Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.038387 4679 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="sg-core" containerID="cri-o://fe3fc7470e1a98184b7765bfcfa9c2cc0f31f06ddda4d8711626fee2017165a0" gracePeriod=30 Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.037420 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-central-agent" containerID="cri-o://42de09521e2201cbf9d16b1a49ad25963236c2d77264be13b5edf6c22c2fbc03" gracePeriod=30 Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.070684 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.733527385 podStartE2EDuration="7.07066325s" podCreationTimestamp="2026-02-03 12:27:42 +0000 UTC" firstStartedPulling="2026-02-03 12:27:43.854395629 +0000 UTC m=+1336.329291717" lastFinishedPulling="2026-02-03 12:27:48.191531494 +0000 UTC m=+1340.666427582" observedRunningTime="2026-02-03 12:27:49.065384892 +0000 UTC m=+1341.540280980" watchObservedRunningTime="2026-02-03 12:27:49.07066325 +0000 UTC m=+1341.545559338" Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.378726 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.487680 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fdmfq"] Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.489303 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerName="dnsmasq-dns" containerID="cri-o://34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5" gracePeriod=10 Feb 03 12:27:49 crc kubenswrapper[4679]: I0203 12:27:49.992425 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.052405 4679 generic.go:334] "Generic (PLEG): container finished" podID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerID="6a77ff6260ecd9ed74cf9e16f8862856ca0ec94b4b2b15752152da588fa30539" exitCode=0 Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.052436 4679 generic.go:334] "Generic (PLEG): container finished" podID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerID="fe3fc7470e1a98184b7765bfcfa9c2cc0f31f06ddda4d8711626fee2017165a0" exitCode=2 Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.052445 4679 generic.go:334] "Generic (PLEG): container finished" podID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerID="bd655d9835e55fb5c8dc389c02144757a11df78aefc7954e9f383b0b65780705" exitCode=0 Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.052485 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerDied","Data":"6a77ff6260ecd9ed74cf9e16f8862856ca0ec94b4b2b15752152da588fa30539"} Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.052513 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerDied","Data":"fe3fc7470e1a98184b7765bfcfa9c2cc0f31f06ddda4d8711626fee2017165a0"} Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.052523 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerDied","Data":"bd655d9835e55fb5c8dc389c02144757a11df78aefc7954e9f383b0b65780705"} Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.056290 4679 generic.go:334] "Generic (PLEG): container finished" podID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerID="34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5" exitCode=0 Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.056337 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" event={"ID":"26335cab-653d-46b0-97a2-a8b4ba9ebdcc","Type":"ContainerDied","Data":"34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5"} Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.056387 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.056404 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-fdmfq" event={"ID":"26335cab-653d-46b0-97a2-a8b4ba9ebdcc","Type":"ContainerDied","Data":"454a2852db4fad4c0ea6e2c3665cac7575cf10b3988f66d0b0f6d83a24087078"} Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.056425 4679 scope.go:117] "RemoveContainer" containerID="34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.105853 4679 scope.go:117] "RemoveContainer" containerID="46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.108168 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-svc\") pod \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.108308 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-swift-storage-0\") pod \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.108430 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-nb\") pod \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.108543 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ljjz\" (UniqueName: \"kubernetes.io/projected/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-kube-api-access-4ljjz\") pod \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.108581 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-sb\") pod \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.108609 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-config\") pod \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\" (UID: \"26335cab-653d-46b0-97a2-a8b4ba9ebdcc\") " Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.120633 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-kube-api-access-4ljjz" (OuterVolumeSpecName: "kube-api-access-4ljjz") pod "26335cab-653d-46b0-97a2-a8b4ba9ebdcc" (UID: "26335cab-653d-46b0-97a2-a8b4ba9ebdcc"). InnerVolumeSpecName "kube-api-access-4ljjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.147759 4679 scope.go:117] "RemoveContainer" containerID="34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5" Feb 03 12:27:50 crc kubenswrapper[4679]: E0203 12:27:50.148272 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5\": container with ID starting with 34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5 not found: ID does not exist" containerID="34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.148325 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5"} err="failed to get container status \"34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5\": rpc error: code = NotFound desc = could not find container \"34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5\": container with ID starting with 34f126d3490cf1e8a1e885cf5393717f7c8f91386533af25099b1e8bbb4b32a5 not found: ID does not exist" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.148371 4679 scope.go:117] "RemoveContainer" containerID="46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a" Feb 03 12:27:50 crc kubenswrapper[4679]: E0203 12:27:50.148626 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a\": container with ID starting with 46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a not found: ID does not exist" containerID="46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.148844 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a"} err="failed to get container status \"46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a\": rpc error: code = NotFound desc = could not find container \"46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a\": container with ID starting with 46ccd9eac598086d43a06b838356ac264a91b3af50c2a3ec73afd161e9b9e52a not found: ID does not exist" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.180274 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26335cab-653d-46b0-97a2-a8b4ba9ebdcc" (UID: "26335cab-653d-46b0-97a2-a8b4ba9ebdcc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.181931 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "26335cab-653d-46b0-97a2-a8b4ba9ebdcc" (UID: "26335cab-653d-46b0-97a2-a8b4ba9ebdcc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.185042 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26335cab-653d-46b0-97a2-a8b4ba9ebdcc" (UID: "26335cab-653d-46b0-97a2-a8b4ba9ebdcc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.188187 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26335cab-653d-46b0-97a2-a8b4ba9ebdcc" (UID: "26335cab-653d-46b0-97a2-a8b4ba9ebdcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.192688 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-config" (OuterVolumeSpecName: "config") pod "26335cab-653d-46b0-97a2-a8b4ba9ebdcc" (UID: "26335cab-653d-46b0-97a2-a8b4ba9ebdcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.210967 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.211029 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.211044 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.211057 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ljjz\" (UniqueName: \"kubernetes.io/projected/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-kube-api-access-4ljjz\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.211070 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.211082 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26335cab-653d-46b0-97a2-a8b4ba9ebdcc-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.381760 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fdmfq"] Feb 03 12:27:50 crc kubenswrapper[4679]: I0203 12:27:50.391243 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-fdmfq"] Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.097711 4679 generic.go:334] "Generic (PLEG): container finished" podID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerID="42de09521e2201cbf9d16b1a49ad25963236c2d77264be13b5edf6c22c2fbc03" exitCode=0 Feb 03 12:27:52 crc 
kubenswrapper[4679]: I0203 12:27:52.097822 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerDied","Data":"42de09521e2201cbf9d16b1a49ad25963236c2d77264be13b5edf6c22c2fbc03"} Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.099030 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa391bd-d9a6-4011-adaf-1a9596f14605","Type":"ContainerDied","Data":"31db735cba3e7c8129338320062c65ab52a1974b2ba3f6b9f8b7729f00bbce4c"} Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.099137 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31db735cba3e7c8129338320062c65ab52a1974b2ba3f6b9f8b7729f00bbce4c" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.159043 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.225133 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" path="/var/lib/kubelet/pods/26335cab-653d-46b0-97a2-a8b4ba9ebdcc/volumes" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277607 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-sg-core-conf-yaml\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277670 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-combined-ca-bundle\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277702 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-config-data\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277759 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-run-httpd\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277814 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-scripts\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277871 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc25t\" (UniqueName: \"kubernetes.io/projected/4fa391bd-d9a6-4011-adaf-1a9596f14605-kube-api-access-rc25t\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277903 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-ceilometer-tls-certs\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.277990 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-log-httpd\") pod \"4fa391bd-d9a6-4011-adaf-1a9596f14605\" (UID: \"4fa391bd-d9a6-4011-adaf-1a9596f14605\") " Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.278960 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.279075 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.285117 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-scripts" (OuterVolumeSpecName: "scripts") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.285468 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa391bd-d9a6-4011-adaf-1a9596f14605-kube-api-access-rc25t" (OuterVolumeSpecName: "kube-api-access-rc25t") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "kube-api-access-rc25t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.316194 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.356308 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.370490 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.380814 4679 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.381668 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.381760 4679 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.381819 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.381877 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc25t\" (UniqueName: \"kubernetes.io/projected/4fa391bd-d9a6-4011-adaf-1a9596f14605-kube-api-access-rc25t\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.381934 4679 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.381995 4679 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa391bd-d9a6-4011-adaf-1a9596f14605-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.399952 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-config-data" (OuterVolumeSpecName: "config-data") pod "4fa391bd-d9a6-4011-adaf-1a9596f14605" (UID: "4fa391bd-d9a6-4011-adaf-1a9596f14605"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:27:52 crc kubenswrapper[4679]: I0203 12:27:52.484543 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa391bd-d9a6-4011-adaf-1a9596f14605-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.120768 4679 generic.go:334] "Generic (PLEG): container finished" podID="313bf325-bcb3-47af-9916-3e441aa0754a" containerID="af4e745a5c5fe24439f5af568f671094ec925c4987c5c373c76e844c0e8c5bb8" exitCode=0 Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.120936 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-722gz" event={"ID":"313bf325-bcb3-47af-9916-3e441aa0754a","Type":"ContainerDied","Data":"af4e745a5c5fe24439f5af568f671094ec925c4987c5c373c76e844c0e8c5bb8"} Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.122527 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.189077 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.201616 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.217489 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:53 crc kubenswrapper[4679]: E0203 12:27:53.218084 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="sg-core" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218114 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="sg-core" Feb 03 12:27:53 crc kubenswrapper[4679]: E0203 12:27:53.218140 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-central-agent" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218151 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-central-agent" Feb 03 12:27:53 crc kubenswrapper[4679]: E0203 12:27:53.218192 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="proxy-httpd" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218202 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="proxy-httpd" Feb 03 12:27:53 crc kubenswrapper[4679]: E0203 12:27:53.218224 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerName="init" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218236 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerName="init" Feb 03 12:27:53 crc kubenswrapper[4679]: E0203 12:27:53.218265 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-notification-agent" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218276 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-notification-agent" Feb 03 12:27:53 crc kubenswrapper[4679]: E0203 12:27:53.218298 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerName="dnsmasq-dns" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218311 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerName="dnsmasq-dns" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218603 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-notification-agent" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218652 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="sg-core" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218678 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="ceilometer-central-agent" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 
12:27:53.218695 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="26335cab-653d-46b0-97a2-a8b4ba9ebdcc" containerName="dnsmasq-dns" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.218707 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" containerName="proxy-httpd" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.223520 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.226709 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.226968 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.227175 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.231400 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.304959 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305063 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-log-httpd\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305115 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305150 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-config-data\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305197 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-scripts\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305272 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq48g\" (UniqueName: \"kubernetes.io/projected/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-kube-api-access-hq48g\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305324 4679 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.305347 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-run-httpd\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.407718 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-scripts\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.408254 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq48g\" (UniqueName: \"kubernetes.io/projected/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-kube-api-access-hq48g\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.409223 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.410120 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-run-httpd\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.410825 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.411040 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-log-httpd\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.411262 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.411452 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-config-data\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0" Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 
12:27:53.411840 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-log-httpd\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.411553 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-run-httpd\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.415102 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.419236 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.427858 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-scripts\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.431672 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.432534 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-config-data\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.433469 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq48g\" (UniqueName: \"kubernetes.io/projected/3c9a97ad-868b-4b32-b200-ee3cb3ad9098-kube-api-access-hq48g\") pod \"ceilometer-0\" (UID: \"3c9a97ad-868b-4b32-b200-ee3cb3ad9098\") " pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.543253 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 12:27:53 crc kubenswrapper[4679]: I0203 12:27:53.996718 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 12:27:53 crc kubenswrapper[4679]: W0203 12:27:53.997525 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c9a97ad_868b_4b32_b200_ee3cb3ad9098.slice/crio-ee498b858d0131754621f952b5bfd35ca02951d0ab8c888d1d63734cbdb8ea0c WatchSource:0}: Error finding container ee498b858d0131754621f952b5bfd35ca02951d0ab8c888d1d63734cbdb8ea0c: Status 404 returned error can't find the container with id ee498b858d0131754621f952b5bfd35ca02951d0ab8c888d1d63734cbdb8ea0c
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.134319 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c9a97ad-868b-4b32-b200-ee3cb3ad9098","Type":"ContainerStarted","Data":"ee498b858d0131754621f952b5bfd35ca02951d0ab8c888d1d63734cbdb8ea0c"}
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.239769 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa391bd-d9a6-4011-adaf-1a9596f14605" path="/var/lib/kubelet/pods/4fa391bd-d9a6-4011-adaf-1a9596f14605/volumes"
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.437493 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-722gz"
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.542224 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-config-data\") pod \"313bf325-bcb3-47af-9916-3e441aa0754a\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") "
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.542446 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-combined-ca-bundle\") pod \"313bf325-bcb3-47af-9916-3e441aa0754a\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") "
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.542500 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-scripts\") pod \"313bf325-bcb3-47af-9916-3e441aa0754a\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") "
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.542533 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh2x8\" (UniqueName: \"kubernetes.io/projected/313bf325-bcb3-47af-9916-3e441aa0754a-kube-api-access-wh2x8\") pod \"313bf325-bcb3-47af-9916-3e441aa0754a\" (UID: \"313bf325-bcb3-47af-9916-3e441aa0754a\") "
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.546752 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-scripts" (OuterVolumeSpecName: "scripts") pod "313bf325-bcb3-47af-9916-3e441aa0754a" (UID: "313bf325-bcb3-47af-9916-3e441aa0754a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.547984 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313bf325-bcb3-47af-9916-3e441aa0754a-kube-api-access-wh2x8" (OuterVolumeSpecName: "kube-api-access-wh2x8") pod "313bf325-bcb3-47af-9916-3e441aa0754a" (UID: "313bf325-bcb3-47af-9916-3e441aa0754a"). InnerVolumeSpecName "kube-api-access-wh2x8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.574034 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-config-data" (OuterVolumeSpecName: "config-data") pod "313bf325-bcb3-47af-9916-3e441aa0754a" (UID: "313bf325-bcb3-47af-9916-3e441aa0754a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.576800 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "313bf325-bcb3-47af-9916-3e441aa0754a" (UID: "313bf325-bcb3-47af-9916-3e441aa0754a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.645264 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.645299 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.645316 4679 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/313bf325-bcb3-47af-9916-3e441aa0754a-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:54 crc kubenswrapper[4679]: I0203 12:27:54.645328 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh2x8\" (UniqueName: \"kubernetes.io/projected/313bf325-bcb3-47af-9916-3e441aa0754a-kube-api-access-wh2x8\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.149024 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c9a97ad-868b-4b32-b200-ee3cb3ad9098","Type":"ContainerStarted","Data":"463c6544bb029867874687035b73918dd9f02f4882965c23f794e53b74253aba"}
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.151168 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-722gz" event={"ID":"313bf325-bcb3-47af-9916-3e441aa0754a","Type":"ContainerDied","Data":"2190e5416661d0346d66e0abbe4f5feff6a4cac7b2cb98915d29787306e243d8"}
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.151200 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2190e5416661d0346d66e0abbe4f5feff6a4cac7b2cb98915d29787306e243d8"
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.151332 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-722gz"
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.332750 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.333318 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-log" containerID="cri-o://a672d1781e60b38b6732f4b065c5667432b67bc88ed2578f057fdff31e436564" gracePeriod=30
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.333426 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-api" containerID="cri-o://4f714a01ca0dbeebb3257a08b627fa98005255060ca5022bdaddc394e0a26f36" gracePeriod=30
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.369422 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.370222 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" containerName="nova-scheduler-scheduler" containerID="cri-o://b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a" gracePeriod=30
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.385186 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.385565 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-log" containerID="cri-o://e6d5560b9cf46d3d95f66c3555d739c3c59a6eafea3d02913d6e8e8342244cec" gracePeriod=30
Feb 03 12:27:55 crc kubenswrapper[4679]: I0203 12:27:55.386203 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-metadata" containerID="cri-o://19f783f3bb0e53b9312493e4a6fe7224a0a7f436d01360361631f1fe4f307ced" gracePeriod=30
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.181594 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c9a97ad-868b-4b32-b200-ee3cb3ad9098","Type":"ContainerStarted","Data":"6b64b19945566e623c63e84490fa62726ccf783f87759df02541242c0642fb53"}
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.186548 4679 generic.go:334] "Generic (PLEG): container finished" podID="9fa29abf-007c-4d67-be39-3289d67a125d" containerID="e6d5560b9cf46d3d95f66c3555d739c3c59a6eafea3d02913d6e8e8342244cec" exitCode=143
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.186602 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa29abf-007c-4d67-be39-3289d67a125d","Type":"ContainerDied","Data":"e6d5560b9cf46d3d95f66c3555d739c3c59a6eafea3d02913d6e8e8342244cec"}
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.192601 4679 generic.go:334] "Generic (PLEG): container finished" podID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerID="4f714a01ca0dbeebb3257a08b627fa98005255060ca5022bdaddc394e0a26f36" exitCode=0
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.192632 4679 generic.go:334] "Generic (PLEG): container finished" podID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerID="a672d1781e60b38b6732f4b065c5667432b67bc88ed2578f057fdff31e436564" exitCode=143
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.192653 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4602bf28-877f-4acc-9137-f0ef0e2709f7","Type":"ContainerDied","Data":"4f714a01ca0dbeebb3257a08b627fa98005255060ca5022bdaddc394e0a26f36"}
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.192678 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4602bf28-877f-4acc-9137-f0ef0e2709f7","Type":"ContainerDied","Data":"a672d1781e60b38b6732f4b065c5667432b67bc88ed2578f057fdff31e436564"}
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.306256 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.388041 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-combined-ca-bundle\") pod \"4602bf28-877f-4acc-9137-f0ef0e2709f7\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") "
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.388309 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-internal-tls-certs\") pod \"4602bf28-877f-4acc-9137-f0ef0e2709f7\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") "
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.388417 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-config-data\") pod \"4602bf28-877f-4acc-9137-f0ef0e2709f7\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") "
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.388599 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4602bf28-877f-4acc-9137-f0ef0e2709f7-logs\") pod \"4602bf28-877f-4acc-9137-f0ef0e2709f7\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") "
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.388692 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-public-tls-certs\") pod \"4602bf28-877f-4acc-9137-f0ef0e2709f7\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") "
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.388781 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxxfm\" (UniqueName: \"kubernetes.io/projected/4602bf28-877f-4acc-9137-f0ef0e2709f7-kube-api-access-qxxfm\") pod \"4602bf28-877f-4acc-9137-f0ef0e2709f7\" (UID: \"4602bf28-877f-4acc-9137-f0ef0e2709f7\") "
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.390611 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4602bf28-877f-4acc-9137-f0ef0e2709f7-logs" (OuterVolumeSpecName: "logs") pod "4602bf28-877f-4acc-9137-f0ef0e2709f7" (UID: "4602bf28-877f-4acc-9137-f0ef0e2709f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.395604 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4602bf28-877f-4acc-9137-f0ef0e2709f7-kube-api-access-qxxfm" (OuterVolumeSpecName: "kube-api-access-qxxfm") pod "4602bf28-877f-4acc-9137-f0ef0e2709f7" (UID: "4602bf28-877f-4acc-9137-f0ef0e2709f7"). InnerVolumeSpecName "kube-api-access-qxxfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.422698 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4602bf28-877f-4acc-9137-f0ef0e2709f7" (UID: "4602bf28-877f-4acc-9137-f0ef0e2709f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.424647 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-config-data" (OuterVolumeSpecName: "config-data") pod "4602bf28-877f-4acc-9137-f0ef0e2709f7" (UID: "4602bf28-877f-4acc-9137-f0ef0e2709f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.448592 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4602bf28-877f-4acc-9137-f0ef0e2709f7" (UID: "4602bf28-877f-4acc-9137-f0ef0e2709f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.448922 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4602bf28-877f-4acc-9137-f0ef0e2709f7" (UID: "4602bf28-877f-4acc-9137-f0ef0e2709f7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.491551 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4602bf28-877f-4acc-9137-f0ef0e2709f7-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.491586 4679 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.491596 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxxfm\" (UniqueName: \"kubernetes.io/projected/4602bf28-877f-4acc-9137-f0ef0e2709f7-kube-api-access-qxxfm\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.491606 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.491614 4679 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:56 crc kubenswrapper[4679]: I0203 12:27:56.491624 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4602bf28-877f-4acc-9137-f0ef0e2709f7-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:56 crc kubenswrapper[4679]: E0203 12:27:56.945487 4679 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 03 12:27:56 crc kubenswrapper[4679]: E0203 12:27:56.947199 4679 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 03 12:27:56 crc kubenswrapper[4679]: E0203 12:27:56.952710 4679 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 03 12:27:56 crc kubenswrapper[4679]: E0203 12:27:56.952783 4679 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" containerName="nova-scheduler-scheduler"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.205541 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4602bf28-877f-4acc-9137-f0ef0e2709f7","Type":"ContainerDied","Data":"aaab69a485ec13dbed66dd60ba4e49333ad821fe7a276d5e0f628fd455983e04"}
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.205594 4679 scope.go:117] "RemoveContainer" containerID="4f714a01ca0dbeebb3257a08b627fa98005255060ca5022bdaddc394e0a26f36"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.205711 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.209967 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c9a97ad-868b-4b32-b200-ee3cb3ad9098","Type":"ContainerStarted","Data":"6b79e679aa4947fa08a95a6ee7a13b0b2c7557264497b19dbc9c68fe9d96e870"}
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.245344 4679 scope.go:117] "RemoveContainer" containerID="a672d1781e60b38b6732f4b065c5667432b67bc88ed2578f057fdff31e436564"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.248523 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.266996 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281064 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:57 crc kubenswrapper[4679]: E0203 12:27:57.281535 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313bf325-bcb3-47af-9916-3e441aa0754a" containerName="nova-manage"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281556 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="313bf325-bcb3-47af-9916-3e441aa0754a" containerName="nova-manage"
Feb 03 12:27:57 crc kubenswrapper[4679]: E0203 12:27:57.281580 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-api"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281588 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-api"
Feb 03 12:27:57 crc kubenswrapper[4679]: E0203 12:27:57.281601 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-log"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281607 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-log"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281784 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-log"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281800 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" containerName="nova-api-api"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.281813 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="313bf325-bcb3-47af-9916-3e441aa0754a" containerName="nova-manage"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.282840 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.285878 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.286077 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.286335 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.314304 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.408811 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0144e14a-b09d-4182-8008-358b3032b05c-logs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.408890 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wrr\" (UniqueName: \"kubernetes.io/projected/0144e14a-b09d-4182-8008-358b3032b05c-kube-api-access-72wrr\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.409005 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-public-tls-certs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.409030 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.409108 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-config-data\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.409146 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.511115 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-public-tls-certs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.511179 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.511215 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-config-data\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.511239 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.511402 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0144e14a-b09d-4182-8008-358b3032b05c-logs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.511454 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wrr\" (UniqueName: \"kubernetes.io/projected/0144e14a-b09d-4182-8008-358b3032b05c-kube-api-access-72wrr\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.512160 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0144e14a-b09d-4182-8008-358b3032b05c-logs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.516634 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-public-tls-certs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.517297 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.517995 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.526099 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0144e14a-b09d-4182-8008-358b3032b05c-config-data\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.532921 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wrr\" (UniqueName: \"kubernetes.io/projected/0144e14a-b09d-4182-8008-358b3032b05c-kube-api-access-72wrr\") pod \"nova-api-0\" (UID: \"0144e14a-b09d-4182-8008-358b3032b05c\") " pod="openstack/nova-api-0"
Feb 03 12:27:57 crc kubenswrapper[4679]: I0203 12:27:57.631664 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 12:27:58 crc kubenswrapper[4679]: I0203 12:27:58.120070 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 12:27:58 crc kubenswrapper[4679]: W0203 12:27:58.126573 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0144e14a_b09d_4182_8008_358b3032b05c.slice/crio-49d1b9c930daf6c864160473fc31d1f38823f7e52a391f7d029bd7624b151ad7 WatchSource:0}: Error finding container 49d1b9c930daf6c864160473fc31d1f38823f7e52a391f7d029bd7624b151ad7: Status 404 returned error can't find the container with id 49d1b9c930daf6c864160473fc31d1f38823f7e52a391f7d029bd7624b151ad7
Feb 03 12:27:58 crc kubenswrapper[4679]: I0203 12:27:58.231109 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4602bf28-877f-4acc-9137-f0ef0e2709f7" path="/var/lib/kubelet/pods/4602bf28-877f-4acc-9137-f0ef0e2709f7/volumes"
Feb 03 12:27:58 crc kubenswrapper[4679]: I0203 12:27:58.232998 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0144e14a-b09d-4182-8008-358b3032b05c","Type":"ContainerStarted","Data":"49d1b9c930daf6c864160473fc31d1f38823f7e52a391f7d029bd7624b151ad7"}
Feb 03 12:27:58 crc kubenswrapper[4679]: I0203 12:27:58.826741 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:60916->10.217.0.195:8775: read: connection reset by peer"
Feb 03 12:27:58 crc kubenswrapper[4679]: I0203 12:27:58.826801 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:60914->10.217.0.195:8775: read: connection reset by peer"
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.239754 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c9a97ad-868b-4b32-b200-ee3cb3ad9098","Type":"ContainerStarted","Data":"33d23f53e01da83f28195314d280b100bf6788a0fe61e52fb869dc365e59f34d"}
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.239885 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.241712 4679 generic.go:334] "Generic (PLEG): container finished" podID="9fa29abf-007c-4d67-be39-3289d67a125d" containerID="19f783f3bb0e53b9312493e4a6fe7224a0a7f436d01360361631f1fe4f307ced" exitCode=0
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.241781 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa29abf-007c-4d67-be39-3289d67a125d","Type":"ContainerDied","Data":"19f783f3bb0e53b9312493e4a6fe7224a0a7f436d01360361631f1fe4f307ced"}
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.241822 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9fa29abf-007c-4d67-be39-3289d67a125d","Type":"ContainerDied","Data":"ff21d31c421b4b1dad2963838c8dcb6a699fced6a7257a025a05bed8ba72feb3"}
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.241834 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff21d31c421b4b1dad2963838c8dcb6a699fced6a7257a025a05bed8ba72feb3"
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.243005 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0144e14a-b09d-4182-8008-358b3032b05c","Type":"ContainerStarted","Data":"4731eac77982cdeabd1764df5d04a36007230cb5beb1c7978e78d27e72056d90"}
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.243027 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0144e14a-b09d-4182-8008-358b3032b05c","Type":"ContainerStarted","Data":"282d789390533ce5f0aea9b23e7dad7a7e0a050879d8c10053d6e1f1e9ee9650"}
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.267599 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.950358967 podStartE2EDuration="6.267571772s" podCreationTimestamp="2026-02-03 12:27:53 +0000 UTC" firstStartedPulling="2026-02-03 12:27:54.000341324 +0000 UTC m=+1346.475237432" lastFinishedPulling="2026-02-03 12:27:58.317554149 +0000 UTC m=+1350.792450237" observedRunningTime="2026-02-03 12:27:59.262987152 +0000 UTC m=+1351.737883250" watchObservedRunningTime="2026-02-03 12:27:59.267571772 +0000 UTC m=+1351.742467860"
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.303989 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.303963969 podStartE2EDuration="2.303963969s" podCreationTimestamp="2026-02-03 12:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:27:59.297661987 +0000 UTC m=+1351.772558075" watchObservedRunningTime="2026-02-03 12:27:59.303963969 +0000 UTC m=+1351.778860047"
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.312400 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.464087 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-combined-ca-bundle\") pod \"9fa29abf-007c-4d67-be39-3289d67a125d\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") "
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.464347 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-284j7\" (UniqueName: \"kubernetes.io/projected/9fa29abf-007c-4d67-be39-3289d67a125d-kube-api-access-284j7\") pod \"9fa29abf-007c-4d67-be39-3289d67a125d\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") "
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.464429 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-config-data\") pod \"9fa29abf-007c-4d67-be39-3289d67a125d\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") "
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.464480 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-nova-metadata-tls-certs\") pod \"9fa29abf-007c-4d67-be39-3289d67a125d\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") "
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.464555 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa29abf-007c-4d67-be39-3289d67a125d-logs\") pod \"9fa29abf-007c-4d67-be39-3289d67a125d\" (UID: \"9fa29abf-007c-4d67-be39-3289d67a125d\") "
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.465112 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa29abf-007c-4d67-be39-3289d67a125d-logs" (OuterVolumeSpecName: "logs") pod "9fa29abf-007c-4d67-be39-3289d67a125d" (UID: "9fa29abf-007c-4d67-be39-3289d67a125d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.485736 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa29abf-007c-4d67-be39-3289d67a125d-kube-api-access-284j7" (OuterVolumeSpecName: "kube-api-access-284j7") pod "9fa29abf-007c-4d67-be39-3289d67a125d" (UID: "9fa29abf-007c-4d67-be39-3289d67a125d"). InnerVolumeSpecName "kube-api-access-284j7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.534480 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fa29abf-007c-4d67-be39-3289d67a125d" (UID: "9fa29abf-007c-4d67-be39-3289d67a125d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.534746 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-config-data" (OuterVolumeSpecName: "config-data") pod "9fa29abf-007c-4d67-be39-3289d67a125d" (UID: "9fa29abf-007c-4d67-be39-3289d67a125d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.558933 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9fa29abf-007c-4d67-be39-3289d67a125d" (UID: "9fa29abf-007c-4d67-be39-3289d67a125d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.567924 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.567975 4679 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.567993 4679 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fa29abf-007c-4d67-be39-3289d67a125d-logs\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.568006 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa29abf-007c-4d67-be39-3289d67a125d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:27:59 crc kubenswrapper[4679]: I0203 12:27:59.568021 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-284j7\" (UniqueName: \"kubernetes.io/projected/9fa29abf-007c-4d67-be39-3289d67a125d-kube-api-access-284j7\") on node \"crc\" DevicePath \"\""
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.253577 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.285629 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.311170 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.329433 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:28:00 crc kubenswrapper[4679]: E0203 12:28:00.329887 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-log"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.329905 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-log"
Feb 03 12:28:00 crc kubenswrapper[4679]: E0203 12:28:00.329923 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-metadata"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.329930 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-metadata"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.330133 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-metadata"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.330161 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" containerName="nova-metadata-log"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.331160 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.335059 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.335324 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 03 12:28:00 crc kubenswrapper[4679]: E0203 12:28:00.354977 4679 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fa29abf_007c_4d67_be39_3289d67a125d.slice/crio-ff21d31c421b4b1dad2963838c8dcb6a699fced6a7257a025a05bed8ba72feb3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fa29abf_007c_4d67_be39_3289d67a125d.slice\": RecentStats: unable to find data in memory cache]"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.371418 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.489680 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-config-data\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.490652 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.490947 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.491110 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrvx\" (UniqueName: \"kubernetes.io/projected/43e2a214-af77-4834-9af8-6435c0cc24ba-kube-api-access-4nrvx\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.491368 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43e2a214-af77-4834-9af8-6435c0cc24ba-logs\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.594631 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43e2a214-af77-4834-9af8-6435c0cc24ba-logs\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.594706 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-config-data\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.594746 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.594842 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.594863 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrvx\" (UniqueName: \"kubernetes.io/projected/43e2a214-af77-4834-9af8-6435c0cc24ba-kube-api-access-4nrvx\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.595793 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43e2a214-af77-4834-9af8-6435c0cc24ba-logs\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.602212 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.602845 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-config-data\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.609049 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e2a214-af77-4834-9af8-6435c0cc24ba-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.615836 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrvx\" (UniqueName: \"kubernetes.io/projected/43e2a214-af77-4834-9af8-6435c0cc24ba-kube-api-access-4nrvx\") pod \"nova-metadata-0\" (UID: \"43e2a214-af77-4834-9af8-6435c0cc24ba\") " pod="openstack/nova-metadata-0"
Feb 03 12:28:00 crc kubenswrapper[4679]: I0203 12:28:00.660511 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.186818 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 12:28:01 crc kubenswrapper[4679]: W0203 12:28:01.188120 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e2a214_af77_4834_9af8_6435c0cc24ba.slice/crio-f9115813aa2c5cb769d72b4e36a5f19b787933296458a8c23fe047e6abce7be7 WatchSource:0}: Error finding container f9115813aa2c5cb769d72b4e36a5f19b787933296458a8c23fe047e6abce7be7: Status 404 returned error can't find the container with id f9115813aa2c5cb769d72b4e36a5f19b787933296458a8c23fe047e6abce7be7
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.266138 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43e2a214-af77-4834-9af8-6435c0cc24ba","Type":"ContainerStarted","Data":"f9115813aa2c5cb769d72b4e36a5f19b787933296458a8c23fe047e6abce7be7"}
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.268666 4679 generic.go:334] "Generic (PLEG): container finished" podID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" containerID="b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a" exitCode=0
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.268700 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"312934bf-297f-4589-b2cb-8d2abfc3ba2f","Type":"ContainerDied","Data":"b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a"}
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.467686 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.619149 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-combined-ca-bundle\") pod \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") "
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.619233 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-config-data\") pod \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") "
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.619345 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66g9\" (UniqueName: \"kubernetes.io/projected/312934bf-297f-4589-b2cb-8d2abfc3ba2f-kube-api-access-p66g9\") pod \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\" (UID: \"312934bf-297f-4589-b2cb-8d2abfc3ba2f\") "
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.625953 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312934bf-297f-4589-b2cb-8d2abfc3ba2f-kube-api-access-p66g9" (OuterVolumeSpecName: "kube-api-access-p66g9") pod "312934bf-297f-4589-b2cb-8d2abfc3ba2f" (UID: "312934bf-297f-4589-b2cb-8d2abfc3ba2f"). InnerVolumeSpecName "kube-api-access-p66g9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.658660 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "312934bf-297f-4589-b2cb-8d2abfc3ba2f" (UID: "312934bf-297f-4589-b2cb-8d2abfc3ba2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.660714 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-config-data" (OuterVolumeSpecName: "config-data") pod "312934bf-297f-4589-b2cb-8d2abfc3ba2f" (UID: "312934bf-297f-4589-b2cb-8d2abfc3ba2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.721902 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.722337 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312934bf-297f-4589-b2cb-8d2abfc3ba2f-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 12:28:01 crc kubenswrapper[4679]: I0203 12:28:01.722350 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p66g9\" (UniqueName: \"kubernetes.io/projected/312934bf-297f-4589-b2cb-8d2abfc3ba2f-kube-api-access-p66g9\") on node \"crc\" DevicePath \"\""
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.225863 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa29abf-007c-4d67-be39-3289d67a125d" path="/var/lib/kubelet/pods/9fa29abf-007c-4d67-be39-3289d67a125d/volumes"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.279216 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43e2a214-af77-4834-9af8-6435c0cc24ba","Type":"ContainerStarted","Data":"e1060dfbeab24ef567d9409d5764e8d651b7aeb9a0e40664b6e520920f3e30a7"}
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.279285 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43e2a214-af77-4834-9af8-6435c0cc24ba","Type":"ContainerStarted","Data":"03a52a1f175b25b8b5653238b9f23d02350d7dd2904502c6e553657133172b65"}
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.281428 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"312934bf-297f-4589-b2cb-8d2abfc3ba2f","Type":"ContainerDied","Data":"a0507688a51cd14bbc3df7c3a06b5cc7601d22fa7997ccb6adb825d275bd2512"}
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.281459 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.281487 4679 scope.go:117] "RemoveContainer" containerID="b336a5f8f941e40acfdff4e49f584acef73b7e6766b5b2a7a8ac6adbcbcfe98a"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.318404 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.318385189 podStartE2EDuration="2.318385189s" podCreationTimestamp="2026-02-03 12:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:02.309353812 +0000 UTC m=+1354.784249940" watchObservedRunningTime="2026-02-03 12:28:02.318385189 +0000 UTC m=+1354.793281287"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.357083 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.372839 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.403927 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:28:02 crc kubenswrapper[4679]: E0203 12:28:02.404448 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" containerName="nova-scheduler-scheduler"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.404470 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" containerName="nova-scheduler-scheduler"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.404686 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" containerName="nova-scheduler-scheduler"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.405390 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.407744 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.415722 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.539015 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.539185 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-config-data\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.539279 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2bhl\" (UniqueName: \"kubernetes.io/projected/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-kube-api-access-f2bhl\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.642183 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-config-data\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.642378 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2bhl\" (UniqueName: \"kubernetes.io/projected/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-kube-api-access-f2bhl\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.642490 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.650962 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.666258 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-config-data\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.669781 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2bhl\" (UniqueName: \"kubernetes.io/projected/f3933651-b0cd-48e8-bcf4-b6ec20930d3b-kube-api-access-f2bhl\") pod \"nova-scheduler-0\" (UID: \"f3933651-b0cd-48e8-bcf4-b6ec20930d3b\") " pod="openstack/nova-scheduler-0"
Feb 03 12:28:02 crc kubenswrapper[4679]: I0203 12:28:02.723979 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 12:28:03 crc kubenswrapper[4679]: I0203 12:28:03.210010 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 12:28:03 crc kubenswrapper[4679]: I0203 12:28:03.293699 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f3933651-b0cd-48e8-bcf4-b6ec20930d3b","Type":"ContainerStarted","Data":"e2b1d477ef2ed258dc4f59f5ec63210d331abb14cc71a5b1c6b34ba5350ff8f1"}
Feb 03 12:28:04 crc kubenswrapper[4679]: I0203 12:28:04.227757 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312934bf-297f-4589-b2cb-8d2abfc3ba2f" path="/var/lib/kubelet/pods/312934bf-297f-4589-b2cb-8d2abfc3ba2f/volumes"
Feb 03 12:28:04 crc kubenswrapper[4679]: I0203 12:28:04.309936 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f3933651-b0cd-48e8-bcf4-b6ec20930d3b","Type":"ContainerStarted","Data":"ff458cc67f7904265a47c60f7f2d43a2a36cfb9780019c184135749381761e25"}
Feb 03 12:28:04 crc kubenswrapper[4679]: I0203 12:28:04.357672 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.357649882 podStartE2EDuration="2.357649882s" podCreationTimestamp="2026-02-03 12:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:04.340585101 +0000 UTC m=+1356.815481289" watchObservedRunningTime="2026-02-03 12:28:04.357649882 +0000 UTC m=+1356.832545980"
Feb 03 12:28:05 crc kubenswrapper[4679]: I0203 12:28:05.662171 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 03 12:28:05 crc kubenswrapper[4679]: I0203 12:28:05.662477 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 03 12:28:07 crc kubenswrapper[4679]: I0203 12:28:07.632663 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 12:28:07 crc kubenswrapper[4679]: I0203 12:28:07.632721 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 12:28:07 crc kubenswrapper[4679]: I0203 12:28:07.724679 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 03 12:28:08 crc kubenswrapper[4679]: I0203 12:28:08.653564 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0144e14a-b09d-4182-8008-358b3032b05c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:28:08 crc kubenswrapper[4679]: I0203 12:28:08.653572 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0144e14a-b09d-4182-8008-358b3032b05c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:28:10 crc kubenswrapper[4679]: I0203 12:28:10.661417 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 03 12:28:10 crc kubenswrapper[4679]: I0203 12:28:10.661781 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 03 12:28:11 crc kubenswrapper[4679]: I0203 12:28:11.675649 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43e2a214-af77-4834-9af8-6435c0cc24ba" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:28:11 crc kubenswrapper[4679]: I0203 12:28:11.675852 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43e2a214-af77-4834-9af8-6435c0cc24ba" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 03 12:28:12 crc kubenswrapper[4679]: I0203 12:28:12.725848 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 03 12:28:12 crc kubenswrapper[4679]: I0203 12:28:12.761343 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 03 12:28:13 crc kubenswrapper[4679]: I0203 12:28:13.430745 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 03 12:28:17 crc kubenswrapper[4679]: I0203 12:28:17.647240 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 03 12:28:17 crc kubenswrapper[4679]: I0203 12:28:17.648737 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 03 12:28:17 crc kubenswrapper[4679]: I0203 12:28:17.650625 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 03 12:28:17 crc kubenswrapper[4679]: I0203 12:28:17.658189 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 03 12:28:18 crc kubenswrapper[4679]: I0203 12:28:18.473343 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 03 12:28:18 crc kubenswrapper[4679]: I0203 12:28:18.479517 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 03 12:28:20 crc kubenswrapper[4679]: I0203 12:28:20.668080 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 03 12:28:20 crc kubenswrapper[4679]: I0203 12:28:20.669122 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 03 12:28:20 crc kubenswrapper[4679]: I0203 12:28:20.676543 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 03 12:28:21 crc kubenswrapper[4679]: I0203 12:28:21.517480 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 03 12:28:23 crc kubenswrapper[4679]: I0203 12:28:23.554265 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 03 12:28:33 crc kubenswrapper[4679]: I0203 12:28:33.397130 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 03 12:28:34 crc kubenswrapper[4679]: I0203
12:28:34.335802 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:28:36 crc kubenswrapper[4679]: I0203 12:28:36.735485 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:28:36 crc kubenswrapper[4679]: I0203 12:28:36.735822 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:28:38 crc kubenswrapper[4679]: I0203 12:28:38.040122 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="rabbitmq" containerID="cri-o://1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8" gracePeriod=604796 Feb 03 12:28:38 crc kubenswrapper[4679]: I0203 12:28:38.999643 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="rabbitmq" containerID="cri-o://c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7" gracePeriod=604796 Feb 03 12:28:43 crc kubenswrapper[4679]: I0203 12:28:43.882916 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.117516 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"] Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.119518 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.123991 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.133127 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"] Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235273 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235329 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-config\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235449 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235480 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235541 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235646 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.235738 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqh7\" (UniqueName: \"kubernetes.io/projected/9bd38952-8f83-42a1-9493-e1c12640c8bb-kube-api-access-jwqh7\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.336971 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" 
(UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.337554 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqh7\" (UniqueName: \"kubernetes.io/projected/9bd38952-8f83-42a1-9493-e1c12640c8bb-kube-api-access-jwqh7\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.337590 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.337610 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-config\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.337670 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.337689 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.337756 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.338139 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.338543 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-config\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.338627 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 
12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.338753 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.338856 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.338970 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.353597 4679 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 03 12:28:44 crc kubenswrapper[4679]: I0203 12:28:44.368241 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqh7\" (UniqueName: \"kubernetes.io/projected/9bd38952-8f83-42a1-9493-e1c12640c8bb-kube-api-access-jwqh7\") pod \"dnsmasq-dns-79bd4cc8c9-wnhxv\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.455454 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.684706 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.744397 4679 generic.go:334] "Generic (PLEG): container finished" podID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerID="1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8" exitCode=0 Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.744436 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438272b0-d957-44f7-aa5e-502ce5189f9c","Type":"ContainerDied","Data":"1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8"} Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.744476 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.744561 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438272b0-d957-44f7-aa5e-502ce5189f9c","Type":"ContainerDied","Data":"6e08d3d75d65957fde56633b93a5ce5959fba8d07080b88ecf92dbf47bef1654"} Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.744579 4679 scope.go:117] "RemoveContainer" containerID="1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.793556 4679 scope.go:117] "RemoveContainer" containerID="8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.856945 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438272b0-d957-44f7-aa5e-502ce5189f9c-erlang-cookie-secret\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857047 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42xmw\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-kube-api-access-42xmw\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857084 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-erlang-cookie\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857140 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-confd\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857230 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857258 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438272b0-d957-44f7-aa5e-502ce5189f9c-pod-info\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857285 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-tls\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857321 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-config-data\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 
crc kubenswrapper[4679]: I0203 12:28:44.857335 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-server-conf\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857407 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-plugins-conf\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.857435 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-plugins\") pod \"438272b0-d957-44f7-aa5e-502ce5189f9c\" (UID: \"438272b0-d957-44f7-aa5e-502ce5189f9c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.858449 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.870516 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.870928 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.871423 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.888558 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438272b0-d957-44f7-aa5e-502ce5189f9c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.895629 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-kube-api-access-42xmw" (OuterVolumeSpecName: "kube-api-access-42xmw") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "kube-api-access-42xmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.896517 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/438272b0-d957-44f7-aa5e-502ce5189f9c-pod-info" (OuterVolumeSpecName: "pod-info") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.900653 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.908567 4679 scope.go:117] "RemoveContainer" containerID="1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8" Feb 03 12:28:45 crc kubenswrapper[4679]: E0203 12:28:44.909521 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8\": container with ID starting with 1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8 not found: ID does not exist" containerID="1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.909571 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8"} err="failed to get container status \"1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8\": rpc error: code = NotFound desc = could not find container \"1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8\": container with ID starting with 1437a692db21364484ec96010fd37aa47d9d71ca72df6aad5636e433abb4feb8 not found: ID does not exist" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.909596 4679 scope.go:117] "RemoveContainer" containerID="8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3" Feb 03 12:28:45 crc kubenswrapper[4679]: E0203 12:28:44.931614 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3\": container with ID starting with 8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3 not found: ID does not exist" containerID="8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.931669 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3"} err="failed to get container status 
\"8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3\": rpc error: code = NotFound desc = could not find container \"8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3\": container with ID starting with 8e42871f2762aa6145de1ffd03ef2c6f6a62a4febba8cb0c787b0f442a3f3cf3 not found: ID does not exist" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.965994 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42xmw\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-kube-api-access-42xmw\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966041 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966081 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966094 4679 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438272b0-d957-44f7-aa5e-502ce5189f9c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966106 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966119 4679 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966131 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.966142 4679 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438272b0-d957-44f7-aa5e-502ce5189f9c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.995227 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:44.997575 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-config-data" (OuterVolumeSpecName: "config-data") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.010603 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-server-conf" (OuterVolumeSpecName: "server-conf") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.067796 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.068245 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.068256 4679 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438272b0-d957-44f7-aa5e-502ce5189f9c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.105662 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "438272b0-d957-44f7-aa5e-502ce5189f9c" (UID: "438272b0-d957-44f7-aa5e-502ce5189f9c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.170174 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438272b0-d957-44f7-aa5e-502ce5189f9c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.392157 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.427545 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.443890 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:28:45 crc kubenswrapper[4679]: E0203 12:28:45.444393 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="rabbitmq" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.444409 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="rabbitmq" Feb 03 12:28:45 crc kubenswrapper[4679]: E0203 12:28:45.444452 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="setup-container" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.444460 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="setup-container" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.444702 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" containerName="rabbitmq" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.449853 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.454800 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.454977 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xf7k9" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.455121 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.455243 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.455352 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.455487 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.455588 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.494168 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.534930 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"] Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582027 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582101 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891b9bf5-a68a-4118-a002-3b74879fac0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582173 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582340 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582423 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 
12:28:45.582472 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nkq\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-kube-api-access-q8nkq\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582531 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582568 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582643 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582685 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.582713 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891b9bf5-a68a-4118-a002-3b74879fac0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.684593 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.684662 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891b9bf5-a68a-4118-a002-3b74879fac0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.684754 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.684812 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/891b9bf5-a68a-4118-a002-3b74879fac0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.685263 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.685487 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.685531 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.686328 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nkq\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-kube-api-access-q8nkq\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.686439 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.686480 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.686583 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.687174 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.687476 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 
12:28:45.688337 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.688991 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.689349 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.690345 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/891b9bf5-a68a-4118-a002-3b74879fac0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.693742 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.693897 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/891b9bf5-a68a-4118-a002-3b74879fac0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.696618 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/891b9bf5-a68a-4118-a002-3b74879fac0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.698797 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.707619 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nkq\" (UniqueName: \"kubernetes.io/projected/891b9bf5-a68a-4118-a002-3b74879fac0b-kube-api-access-q8nkq\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.708753 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.808385 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-confd\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809508 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-tls\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809564 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-config-data\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809591 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-plugins\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809637 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73f156fc-e458-470c-ad7b-24125be5762c-pod-info\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809658 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-kube-api-access-4zzld\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809724 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809764 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-erlang-cookie\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.809825 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-plugins-conf\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.821379 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: 
"73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.823055 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.823815 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.827327 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" event={"ID":"9bd38952-8f83-42a1-9493-e1c12640c8bb","Type":"ContainerStarted","Data":"f25834db02291bb453918d023d6345d00b6c5586650c0198940ec78f34a3fb02"} Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.827957 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.827984 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.828000 4679 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.829462 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.837817 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-kube-api-access-4zzld" (OuterVolumeSpecName: "kube-api-access-4zzld") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "kube-api-access-4zzld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.838646 4679 generic.go:334] "Generic (PLEG): container finished" podID="73f156fc-e458-470c-ad7b-24125be5762c" containerID="c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7" exitCode=0 Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.838684 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73f156fc-e458-470c-ad7b-24125be5762c","Type":"ContainerDied","Data":"c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7"} Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.838712 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73f156fc-e458-470c-ad7b-24125be5762c","Type":"ContainerDied","Data":"0a6a83f46807df55ee01d13620f8aa3126b6937bdbaa9b773f33e2b28da1bf39"} Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.838729 4679 scope.go:117] "RemoveContainer" containerID="c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.838947 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.838995 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/73f156fc-e458-470c-ad7b-24125be5762c-pod-info" (OuterVolumeSpecName: "pod-info") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.840503 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.862296 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"891b9bf5-a68a-4118-a002-3b74879fac0b\") " pod="openstack/rabbitmq-server-0" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.882879 4679 scope.go:117] "RemoveContainer" containerID="c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.903330 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-config-data" (OuterVolumeSpecName: "config-data") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.923477 4679 scope.go:117] "RemoveContainer" containerID="c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.929253 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-server-conf\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.929544 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73f156fc-e458-470c-ad7b-24125be5762c-erlang-cookie-secret\") pod \"73f156fc-e458-470c-ad7b-24125be5762c\" (UID: \"73f156fc-e458-470c-ad7b-24125be5762c\") " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.930111 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.930140 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.930154 4679 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73f156fc-e458-470c-ad7b-24125be5762c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.930167 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-kube-api-access-4zzld\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.930204 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.934617 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f156fc-e458-470c-ad7b-24125be5762c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: E0203 12:28:45.938898 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7\": container with ID starting with c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7 not found: ID does not exist" containerID="c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.938952 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7"} err="failed to get container status \"c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7\": rpc error: code = NotFound desc = could not find container \"c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7\": container with ID starting with c30bcddd9d55ed1e823fcc6d87d2cd24dad84d03d5562c3347cd4cbb85f5bde7 not found: ID does not exist" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.938979 4679 scope.go:117] "RemoveContainer" containerID="c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934" Feb 03 12:28:45 crc kubenswrapper[4679]: E0203 12:28:45.939330 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934\": container with ID starting with c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934 not found: ID does not exist" containerID="c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.939397 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934"} err="failed to get container status \"c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934\": rpc error: code = NotFound desc = could not find container \"c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934\": container with ID starting with c174dc53374f58207b8a3195fbf579018e9c1f00885b0698c8b537593f48d934 not found: ID does not exist" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.953250 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.977974 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:28:45 crc kubenswrapper[4679]: I0203 12:28:45.984541 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-server-conf" (OuterVolumeSpecName: "server-conf") pod "73f156fc-e458-470c-ad7b-24125be5762c" (UID: "73f156fc-e458-470c-ad7b-24125be5762c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.031314 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.031447 4679 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73f156fc-e458-470c-ad7b-24125be5762c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.031461 4679 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73f156fc-e458-470c-ad7b-24125be5762c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.031469 4679 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73f156fc-e458-470c-ad7b-24125be5762c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.105620 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.179587 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.189478 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.224330 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438272b0-d957-44f7-aa5e-502ce5189f9c" path="/var/lib/kubelet/pods/438272b0-d957-44f7-aa5e-502ce5189f9c/volumes" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.225347 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f156fc-e458-470c-ad7b-24125be5762c" path="/var/lib/kubelet/pods/73f156fc-e458-470c-ad7b-24125be5762c/volumes" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.226098 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:28:46 crc kubenswrapper[4679]: E0203 12:28:46.226518 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="rabbitmq" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.226540 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="rabbitmq" Feb 03 12:28:46 crc kubenswrapper[4679]: E0203 12:28:46.226584 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="setup-container" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.226594 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="setup-container" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.226849 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="73f156fc-e458-470c-ad7b-24125be5762c" containerName="rabbitmq" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.228682 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.228805 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.231920 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.232157 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.232432 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.232618 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.232787 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-crtc8" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.233502 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.233655 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337752 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qvj\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-kube-api-access-n8qvj\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337812 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337868 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/666e9640-9258-44a6-980d-e79d1dc7f2b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337896 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337933 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337953 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.337969 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.338013 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/666e9640-9258-44a6-980d-e79d1dc7f2b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.338029 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.338059 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.338074 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.440289 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.440382 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.440545 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qvj\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-kube-api-access-n8qvj\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.440572 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.440597 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/666e9640-9258-44a6-980d-e79d1dc7f2b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.440626 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441004 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441032 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441296 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441576 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441035 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441619 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.441682 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/666e9640-9258-44a6-980d-e79d1dc7f2b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.442451 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.442739 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.442965 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/666e9640-9258-44a6-980d-e79d1dc7f2b3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.447440 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/666e9640-9258-44a6-980d-e79d1dc7f2b3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.452056 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/666e9640-9258-44a6-980d-e79d1dc7f2b3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.452892 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.458962 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.461950 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qvj\" (UniqueName: \"kubernetes.io/projected/666e9640-9258-44a6-980d-e79d1dc7f2b3-kube-api-access-n8qvj\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.476912 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0"
\"rabbitmq-cell1-server-0\" (UID: \"666e9640-9258-44a6-980d-e79d1dc7f2b3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.559458 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.662228 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 12:28:46 crc kubenswrapper[4679]: W0203 12:28:46.686508 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod891b9bf5_a68a_4118_a002_3b74879fac0b.slice/crio-49e8f8bb5f2a80917b956afef1a6816fe83f4f5e62d4b373177db6d7203dd5d0 WatchSource:0}: Error finding container 49e8f8bb5f2a80917b956afef1a6816fe83f4f5e62d4b373177db6d7203dd5d0: Status 404 returned error can't find the container with id 49e8f8bb5f2a80917b956afef1a6816fe83f4f5e62d4b373177db6d7203dd5d0 Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.861847 4679 generic.go:334] "Generic (PLEG): container finished" podID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerID="bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc" exitCode=0 Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.861945 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" event={"ID":"9bd38952-8f83-42a1-9493-e1c12640c8bb","Type":"ContainerDied","Data":"bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc"} Feb 03 12:28:46 crc kubenswrapper[4679]: I0203 12:28:46.869318 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"891b9bf5-a68a-4118-a002-3b74879fac0b","Type":"ContainerStarted","Data":"49e8f8bb5f2a80917b956afef1a6816fe83f4f5e62d4b373177db6d7203dd5d0"} Feb 03 12:28:47 crc kubenswrapper[4679]: I0203 12:28:47.067108 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 12:28:47 crc kubenswrapper[4679]: I0203 12:28:47.881879 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" event={"ID":"9bd38952-8f83-42a1-9493-e1c12640c8bb","Type":"ContainerStarted","Data":"a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4"} Feb 03 12:28:47 crc kubenswrapper[4679]: I0203 12:28:47.882265 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:47 crc kubenswrapper[4679]: I0203 12:28:47.883405 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"666e9640-9258-44a6-980d-e79d1dc7f2b3","Type":"ContainerStarted","Data":"9ca7c953ec6b981a0722b1d581c0cf36e111698542850a76f629d6abb18c2142"} Feb 03 12:28:47 crc kubenswrapper[4679]: I0203 12:28:47.921564 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" podStartSLOduration=3.9215456250000003 podStartE2EDuration="3.921545625s" podCreationTimestamp="2026-02-03 12:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:47.908157493 +0000 UTC m=+1400.383053581" watchObservedRunningTime="2026-02-03 12:28:47.921545625 +0000 UTC m=+1400.396441713" Feb 03 12:28:48 crc kubenswrapper[4679]: I0203 12:28:48.900003 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"891b9bf5-a68a-4118-a002-3b74879fac0b","Type":"ContainerStarted","Data":"95957a4085630c45894c9bb2b2e44488fdd31ef8b5883e00ad78411c05ba0da8"} Feb 03 12:28:48 crc kubenswrapper[4679]: I0203 12:28:48.908833 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"666e9640-9258-44a6-980d-e79d1dc7f2b3","Type":"ContainerStarted","Data":"9e293b5eed3e3d7e19a13e82a4d2522a3244c39c8fac1e0616fa806f110999a3"} Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.457776 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.541863 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zvwqj"] Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.542415 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerName="dnsmasq-dns" containerID="cri-o://30e2a26710b5a1fd73bcd8655d3892ed0d02e49dda3974067958ba91b005f017" gracePeriod=10 Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.701099 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-bgkzg"] Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.708300 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.739550 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-bgkzg"] Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812056 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-dns-svc\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812145 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812183 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppprl\" (UniqueName: \"kubernetes.io/projected/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-kube-api-access-ppprl\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812247 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812322 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-config\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812405 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.812470 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915034 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-dns-svc\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915528 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915572 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppprl\" (UniqueName: \"kubernetes.io/projected/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-kube-api-access-ppprl\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915652 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915719 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-config\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915789 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.915846 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.916989 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.917028 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.917131 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-config\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.917183 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.917863 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.917880 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-dns-svc\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.949721 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppprl\" (UniqueName: \"kubernetes.io/projected/d3a4bf4d-7cf5-4026-acdf-53345ca82af1-kube-api-access-ppprl\") pod \"dnsmasq-dns-55478c4467-bgkzg\" (UID: \"d3a4bf4d-7cf5-4026-acdf-53345ca82af1\") " pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.968126 4679 generic.go:334] "Generic (PLEG): container finished" podID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerID="30e2a26710b5a1fd73bcd8655d3892ed0d02e49dda3974067958ba91b005f017" exitCode=0 Feb 03 12:28:54 crc kubenswrapper[4679]: I0203 12:28:54.968211 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" event={"ID":"0d9284bf-bc48-4115-af0d-c5a4db772cb4","Type":"ContainerDied","Data":"30e2a26710b5a1fd73bcd8655d3892ed0d02e49dda3974067958ba91b005f017"} Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.078752 4679 util.go:30] "No sandbox for 
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.184439 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj"
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.327062 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-config\") pod \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") "
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.327144 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65p7b\" (UniqueName: \"kubernetes.io/projected/0d9284bf-bc48-4115-af0d-c5a4db772cb4-kube-api-access-65p7b\") pod \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") "
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.327222 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-sb\") pod \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") "
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.327253 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-nb\") pod \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") "
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.327290 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-svc\") pod \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") "
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.327524 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-swift-storage-0\") pod \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\" (UID: \"0d9284bf-bc48-4115-af0d-c5a4db772cb4\") "
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.337808 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9284bf-bc48-4115-af0d-c5a4db772cb4-kube-api-access-65p7b" (OuterVolumeSpecName: "kube-api-access-65p7b") pod "0d9284bf-bc48-4115-af0d-c5a4db772cb4" (UID: "0d9284bf-bc48-4115-af0d-c5a4db772cb4"). InnerVolumeSpecName "kube-api-access-65p7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.386275 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d9284bf-bc48-4115-af0d-c5a4db772cb4" (UID: "0d9284bf-bc48-4115-af0d-c5a4db772cb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.388122 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d9284bf-bc48-4115-af0d-c5a4db772cb4" (UID: "0d9284bf-bc48-4115-af0d-c5a4db772cb4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.394540 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d9284bf-bc48-4115-af0d-c5a4db772cb4" (UID: "0d9284bf-bc48-4115-af0d-c5a4db772cb4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.399380 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d9284bf-bc48-4115-af0d-c5a4db772cb4" (UID: "0d9284bf-bc48-4115-af0d-c5a4db772cb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.409185 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-config" (OuterVolumeSpecName: "config") pod "0d9284bf-bc48-4115-af0d-c5a4db772cb4" (UID: "0d9284bf-bc48-4115-af0d-c5a4db772cb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.429938 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.429975 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.429984 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65p7b\" (UniqueName: \"kubernetes.io/projected/0d9284bf-bc48-4115-af0d-c5a4db772cb4-kube-api-access-65p7b\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.429996 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.430004 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.430013 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d9284bf-bc48-4115-af0d-c5a4db772cb4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.547402 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-55478c4467-bgkzg"] Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.982177 4679 generic.go:334] "Generic (PLEG): container finished" podID="d3a4bf4d-7cf5-4026-acdf-53345ca82af1" containerID="0c0dd07db60afbcdb0dc41d25c391fd8caf90083cb00913cf0a4a5cdac96af33" exitCode=0 Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.982279 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" event={"ID":"d3a4bf4d-7cf5-4026-acdf-53345ca82af1","Type":"ContainerDied","Data":"0c0dd07db60afbcdb0dc41d25c391fd8caf90083cb00913cf0a4a5cdac96af33"} Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.982688 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" event={"ID":"d3a4bf4d-7cf5-4026-acdf-53345ca82af1","Type":"ContainerStarted","Data":"c42e472a91d7507eecdc936fad83032f0710322d0decf46f618bc5b0a72345a2"} Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.985732 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" event={"ID":"0d9284bf-bc48-4115-af0d-c5a4db772cb4","Type":"ContainerDied","Data":"d6ce5e5744e10b64c0814522e458460b160a5822f76ebd1909d55d4fd43b88cf"} Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.985783 4679 scope.go:117] "RemoveContainer" containerID="30e2a26710b5a1fd73bcd8655d3892ed0d02e49dda3974067958ba91b005f017" Feb 03 12:28:55 crc kubenswrapper[4679]: I0203 12:28:55.985926 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zvwqj" Feb 03 12:28:56 crc kubenswrapper[4679]: I0203 12:28:56.150041 4679 scope.go:117] "RemoveContainer" containerID="5e4d901c8231372e73fc8279e4bd64753495730e28a8aa7bfe6c582ceeeacc86" Feb 03 12:28:56 crc kubenswrapper[4679]: I0203 12:28:56.181271 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zvwqj"] Feb 03 12:28:56 crc kubenswrapper[4679]: I0203 12:28:56.190073 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zvwqj"] Feb 03 12:28:56 crc kubenswrapper[4679]: I0203 12:28:56.238681 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" path="/var/lib/kubelet/pods/0d9284bf-bc48-4115-af0d-c5a4db772cb4/volumes" Feb 03 12:28:56 crc kubenswrapper[4679]: I0203 12:28:56.998048 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" event={"ID":"d3a4bf4d-7cf5-4026-acdf-53345ca82af1","Type":"ContainerStarted","Data":"2570d073bdf27e1a90bf5eafab57d548a0b2dc69a576a16c10d936f13e65c92b"} Feb 03 12:28:56 crc kubenswrapper[4679]: I0203 12:28:56.999857 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:28:57 crc kubenswrapper[4679]: I0203 12:28:57.019935 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" podStartSLOduration=3.019912886 podStartE2EDuration="3.019912886s" podCreationTimestamp="2026-02-03 12:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:28:57.017984379 +0000 UTC m=+1409.492880467" watchObservedRunningTime="2026-02-03 12:28:57.019912886 +0000 UTC m=+1409.494808974" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.732197 4679 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-xvx9g"] Feb 03 12:29:02 crc kubenswrapper[4679]: E0203 12:29:02.733549 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerName="init" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.733567 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerName="init" Feb 03 12:29:02 crc kubenswrapper[4679]: E0203 12:29:02.733581 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerName="dnsmasq-dns" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.733587 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerName="dnsmasq-dns" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.733790 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9284bf-bc48-4115-af0d-c5a4db772cb4" containerName="dnsmasq-dns" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.735461 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.749084 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvx9g"] Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.913772 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkj8g\" (UniqueName: \"kubernetes.io/projected/f776df2c-88d3-499e-8f5d-cb9b49080d06-kube-api-access-mkj8g\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.914007 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-utilities\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:02 crc kubenswrapper[4679]: I0203 12:29:02.914044 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-catalog-content\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.015695 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkj8g\" (UniqueName: \"kubernetes.io/projected/f776df2c-88d3-499e-8f5d-cb9b49080d06-kube-api-access-mkj8g\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.015918 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-utilities\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.015953 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-catalog-content\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.016673 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-utilities\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.016704 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-catalog-content\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.038963 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkj8g\" (UniqueName: \"kubernetes.io/projected/f776df2c-88d3-499e-8f5d-cb9b49080d06-kube-api-access-mkj8g\") pod \"redhat-operators-xvx9g\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.075641 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:03 crc kubenswrapper[4679]: I0203 12:29:03.578154 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvx9g"] Feb 03 12:29:04 crc kubenswrapper[4679]: I0203 12:29:04.068313 4679 generic.go:334] "Generic (PLEG): container finished" podID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerID="d656e935db1553a6815cf00ee16926788fc082b9be1859e07e99a2c2f8a3a803" exitCode=0 Feb 03 12:29:04 crc kubenswrapper[4679]: I0203 12:29:04.068388 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerDied","Data":"d656e935db1553a6815cf00ee16926788fc082b9be1859e07e99a2c2f8a3a803"} Feb 03 12:29:04 crc kubenswrapper[4679]: I0203 12:29:04.068656 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerStarted","Data":"f215946f96bafca13c8ee832546ea444704032da8cf5f40d655cd43f83d022a1"} Feb 03 12:29:04 crc kubenswrapper[4679]: I0203 12:29:04.070536 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.080613 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-bgkzg" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.081038 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerStarted","Data":"b13117d1af787c5af931414496961aa2bb21b40cac4d1d94ef37d66542ebb974"} Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.191737 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"] Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.192273 4679 
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.721637 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767124 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwqh7\" (UniqueName: \"kubernetes.io/projected/9bd38952-8f83-42a1-9493-e1c12640c8bb-kube-api-access-jwqh7\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767186 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-nb\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767220 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-svc\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767247 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-sb\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767280 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-swift-storage-0\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767311 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-openstack-edpm-ipam\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.767386 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-config\") pod \"9bd38952-8f83-42a1-9493-e1c12640c8bb\" (UID: \"9bd38952-8f83-42a1-9493-e1c12640c8bb\") "
Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.792250 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd38952-8f83-42a1-9493-e1c12640c8bb-kube-api-access-jwqh7" (OuterVolumeSpecName: "kube-api-access-jwqh7") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "kube-api-access-jwqh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.827603 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.827712 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.845166 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.846391 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.854401 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.855302 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-config" (OuterVolumeSpecName: "config") pod "9bd38952-8f83-42a1-9493-e1c12640c8bb" (UID: "9bd38952-8f83-42a1-9493-e1c12640c8bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868820 4679 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-config\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868864 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwqh7\" (UniqueName: \"kubernetes.io/projected/9bd38952-8f83-42a1-9493-e1c12640c8bb-kube-api-access-jwqh7\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868879 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868893 4679 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868903 4679 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868911 4679 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:05 crc kubenswrapper[4679]: I0203 12:29:05.868921 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9bd38952-8f83-42a1-9493-e1c12640c8bb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.093152 4679 generic.go:334] "Generic (PLEG): container finished" podID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerID="a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4" exitCode=0 Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.093227 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.093236 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" event={"ID":"9bd38952-8f83-42a1-9493-e1c12640c8bb","Type":"ContainerDied","Data":"a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4"} Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.093373 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-wnhxv" event={"ID":"9bd38952-8f83-42a1-9493-e1c12640c8bb","Type":"ContainerDied","Data":"f25834db02291bb453918d023d6345d00b6c5586650c0198940ec78f34a3fb02"} Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.093388 4679 scope.go:117] "RemoveContainer" containerID="a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.130834 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"] Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.135440 4679 scope.go:117] "RemoveContainer" containerID="bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.142661 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-wnhxv"] Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.160798 4679 scope.go:117] "RemoveContainer" containerID="a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4" Feb 03 12:29:06 crc kubenswrapper[4679]: E0203 12:29:06.161431 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4\": container with ID starting with a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4 not found: ID does not exist" containerID="a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.161471 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4"} err="failed to get container status \"a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4\": rpc error: code = NotFound desc = could not find container \"a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4\": container with ID starting with a66c9bbc9d3e9a983f54fa1eff3f68d0c25da7c1a7069589dd52c3a40579a6a4 not found: ID does not exist" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.161497 4679 scope.go:117] "RemoveContainer" containerID="bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc" Feb 03 12:29:06 crc kubenswrapper[4679]: E0203 12:29:06.164536 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc\": container with ID starting with bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc not found: ID does not exist" containerID="bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.164589 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc"} err="failed to get container status 
\"bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc\": rpc error: code = NotFound desc = could not find container \"bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc\": container with ID starting with bccf033cda93209660c69fc5db7532c16e64b8fc88de2f12d03a2ee92d519edc not found: ID does not exist" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.225517 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd38952-8f83-42a1-9493-e1c12640c8bb" path="/var/lib/kubelet/pods/9bd38952-8f83-42a1-9493-e1c12640c8bb/volumes" Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.737259 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:29:06 crc kubenswrapper[4679]: I0203 12:29:06.737695 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:29:07 crc kubenswrapper[4679]: I0203 12:29:07.108538 4679 generic.go:334] "Generic (PLEG): container finished" podID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerID="b13117d1af787c5af931414496961aa2bb21b40cac4d1d94ef37d66542ebb974" exitCode=0 Feb 03 12:29:07 crc kubenswrapper[4679]: I0203 12:29:07.108581 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerDied","Data":"b13117d1af787c5af931414496961aa2bb21b40cac4d1d94ef37d66542ebb974"} Feb 03 12:29:09 crc kubenswrapper[4679]: I0203 12:29:09.129347 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerStarted","Data":"24e8df4a0736dedfc1ed6e36603382f527c68486d642adeeb621f785f5451adb"} Feb 03 12:29:09 crc kubenswrapper[4679]: I0203 12:29:09.153448 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvx9g" podStartSLOduration=2.783749137 podStartE2EDuration="7.153428266s" podCreationTimestamp="2026-02-03 12:29:02 +0000 UTC" firstStartedPulling="2026-02-03 12:29:04.070084059 +0000 UTC m=+1416.544980147" lastFinishedPulling="2026-02-03 12:29:08.439763148 +0000 UTC m=+1420.914659276" observedRunningTime="2026-02-03 12:29:09.144974742 +0000 UTC m=+1421.619870840" watchObservedRunningTime="2026-02-03 12:29:09.153428266 +0000 UTC m=+1421.628324354" Feb 03 12:29:13 crc kubenswrapper[4679]: I0203 12:29:13.076808 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:13 crc kubenswrapper[4679]: I0203 12:29:13.077559 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:14 crc kubenswrapper[4679]: I0203 12:29:14.138565 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvx9g" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="registry-server" probeResult="failure" output=< Feb 03 12:29:14 crc kubenswrapper[4679]: timeout: 
failed to connect service ":50051" within 1s Feb 03 12:29:14 crc kubenswrapper[4679]: > Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.472760 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s"] Feb 03 12:29:18 crc kubenswrapper[4679]: E0203 12:29:18.473514 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerName="dnsmasq-dns" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.473531 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerName="dnsmasq-dns" Feb 03 12:29:18 crc kubenswrapper[4679]: E0203 12:29:18.473551 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerName="init" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.473557 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerName="init" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.473761 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd38952-8f83-42a1-9493-e1c12640c8bb" containerName="dnsmasq-dns" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.474467 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.476567 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.476773 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.477913 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.478042 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.488324 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s"] Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.591338 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxkl\" (UniqueName: \"kubernetes.io/projected/6fc50826-5b8c-4973-bef7-78e861d37c96-kube-api-access-2fxkl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.591414 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.591459 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.591558 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.692783 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.693175 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.693293 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fxkl\" (UniqueName: \"kubernetes.io/projected/6fc50826-5b8c-4973-bef7-78e861d37c96-kube-api-access-2fxkl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.693420 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.699915 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.700829 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.705081 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.723258 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fxkl\" (UniqueName: \"kubernetes.io/projected/6fc50826-5b8c-4973-bef7-78e861d37c96-kube-api-access-2fxkl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:18 crc kubenswrapper[4679]: I0203 12:29:18.845645 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:19 crc kubenswrapper[4679]: I0203 12:29:19.460766 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s"] Feb 03 12:29:19 crc kubenswrapper[4679]: W0203 12:29:19.463792 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc50826_5b8c_4973_bef7_78e861d37c96.slice/crio-e8fc9e647cb23b61eb2d7037393cee8f8dd98803fdd130bcd9c0587481adf396 WatchSource:0}: Error finding container e8fc9e647cb23b61eb2d7037393cee8f8dd98803fdd130bcd9c0587481adf396: Status 404 returned error can't find the container with id e8fc9e647cb23b61eb2d7037393cee8f8dd98803fdd130bcd9c0587481adf396 Feb 03 12:29:20 crc kubenswrapper[4679]: I0203 12:29:20.224306 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" event={"ID":"6fc50826-5b8c-4973-bef7-78e861d37c96","Type":"ContainerStarted","Data":"e8fc9e647cb23b61eb2d7037393cee8f8dd98803fdd130bcd9c0587481adf396"} Feb 03 12:29:21 crc kubenswrapper[4679]: I0203 12:29:21.238631 4679 generic.go:334] "Generic (PLEG): container finished" podID="666e9640-9258-44a6-980d-e79d1dc7f2b3" containerID="9e293b5eed3e3d7e19a13e82a4d2522a3244c39c8fac1e0616fa806f110999a3" exitCode=0 Feb 03 12:29:21 crc kubenswrapper[4679]: I0203 12:29:21.238712 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"666e9640-9258-44a6-980d-e79d1dc7f2b3","Type":"ContainerDied","Data":"9e293b5eed3e3d7e19a13e82a4d2522a3244c39c8fac1e0616fa806f110999a3"} Feb 03 12:29:21 crc kubenswrapper[4679]: I0203 12:29:21.244123 4679 generic.go:334] "Generic (PLEG): container finished" podID="891b9bf5-a68a-4118-a002-3b74879fac0b" containerID="95957a4085630c45894c9bb2b2e44488fdd31ef8b5883e00ad78411c05ba0da8" exitCode=0 Feb 03 12:29:21 crc kubenswrapper[4679]: I0203 12:29:21.244496 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"891b9bf5-a68a-4118-a002-3b74879fac0b","Type":"ContainerDied","Data":"95957a4085630c45894c9bb2b2e44488fdd31ef8b5883e00ad78411c05ba0da8"} Feb 03 12:29:22 crc kubenswrapper[4679]: I0203 12:29:22.257293 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"666e9640-9258-44a6-980d-e79d1dc7f2b3","Type":"ContainerStarted","Data":"11762f2bacd20e3911024b2c98d730e3d432dbba9af5a75ea42d869da8adb400"} Feb 03 12:29:22 crc kubenswrapper[4679]: I0203 12:29:22.257797 4679 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:29:22 crc kubenswrapper[4679]: I0203 12:29:22.260502 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"891b9bf5-a68a-4118-a002-3b74879fac0b","Type":"ContainerStarted","Data":"da763749507fe30111873d8c4999800ef24b716fbd6603b305adde021db9ea5b"} Feb 03 12:29:22 crc kubenswrapper[4679]: I0203 12:29:22.296965 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.296919182 podStartE2EDuration="36.296919182s" podCreationTimestamp="2026-02-03 12:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:22.291885353 +0000 UTC m=+1434.766781461" watchObservedRunningTime="2026-02-03 12:29:22.296919182 +0000 UTC m=+1434.771815270" Feb 03 12:29:23 crc kubenswrapper[4679]: I0203 12:29:23.143895 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:23 crc kubenswrapper[4679]: I0203 12:29:23.172478 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.172449317 podStartE2EDuration="38.172449317s" podCreationTimestamp="2026-02-03 12:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:29:22.331144116 +0000 UTC m=+1434.806040204" watchObservedRunningTime="2026-02-03 12:29:23.172449317 +0000 UTC m=+1435.647345405" Feb 03 12:29:23 crc kubenswrapper[4679]: I0203 12:29:23.201612 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:23 crc kubenswrapper[4679]: I0203 12:29:23.381884 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvx9g"] Feb 03 12:29:24 crc kubenswrapper[4679]: I0203 12:29:24.289724 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xvx9g" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="registry-server" containerID="cri-o://24e8df4a0736dedfc1ed6e36603382f527c68486d642adeeb621f785f5451adb" gracePeriod=2 Feb 03 12:29:25 crc kubenswrapper[4679]: I0203 12:29:25.301131 4679 generic.go:334] "Generic (PLEG): container finished" podID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerID="24e8df4a0736dedfc1ed6e36603382f527c68486d642adeeb621f785f5451adb" exitCode=0 Feb 03 12:29:25 crc kubenswrapper[4679]: I0203 12:29:25.301177 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerDied","Data":"24e8df4a0736dedfc1ed6e36603382f527c68486d642adeeb621f785f5451adb"} Feb 03 12:29:26 crc kubenswrapper[4679]: I0203 12:29:26.106549 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.102572 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.388007 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.389054 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvx9g" event={"ID":"f776df2c-88d3-499e-8f5d-cb9b49080d06","Type":"ContainerDied","Data":"f215946f96bafca13c8ee832546ea444704032da8cf5f40d655cd43f83d022a1"} Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.389193 4679 scope.go:117] "RemoveContainer" containerID="24e8df4a0736dedfc1ed6e36603382f527c68486d642adeeb621f785f5451adb" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.408298 4679 scope.go:117] "RemoveContainer" containerID="b13117d1af787c5af931414496961aa2bb21b40cac4d1d94ef37d66542ebb974" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.448203 4679 scope.go:117] "RemoveContainer" containerID="d656e935db1553a6815cf00ee16926788fc082b9be1859e07e99a2c2f8a3a803" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.454932 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-catalog-content\") pod \"f776df2c-88d3-499e-8f5d-cb9b49080d06\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.455039 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkj8g\" (UniqueName: \"kubernetes.io/projected/f776df2c-88d3-499e-8f5d-cb9b49080d06-kube-api-access-mkj8g\") pod \"f776df2c-88d3-499e-8f5d-cb9b49080d06\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.455107 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-utilities\") pod \"f776df2c-88d3-499e-8f5d-cb9b49080d06\" (UID: \"f776df2c-88d3-499e-8f5d-cb9b49080d06\") " Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.456536 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-utilities" (OuterVolumeSpecName: "utilities") pod "f776df2c-88d3-499e-8f5d-cb9b49080d06" (UID: "f776df2c-88d3-499e-8f5d-cb9b49080d06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.461413 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f776df2c-88d3-499e-8f5d-cb9b49080d06-kube-api-access-mkj8g" (OuterVolumeSpecName: "kube-api-access-mkj8g") pod "f776df2c-88d3-499e-8f5d-cb9b49080d06" (UID: "f776df2c-88d3-499e-8f5d-cb9b49080d06"). InnerVolumeSpecName "kube-api-access-mkj8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.553077 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f776df2c-88d3-499e-8f5d-cb9b49080d06" (UID: "f776df2c-88d3-499e-8f5d-cb9b49080d06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.557598 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkj8g\" (UniqueName: \"kubernetes.io/projected/f776df2c-88d3-499e-8f5d-cb9b49080d06-kube-api-access-mkj8g\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.557657 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:30 crc kubenswrapper[4679]: I0203 12:29:30.557674 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f776df2c-88d3-499e-8f5d-cb9b49080d06-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:31 crc kubenswrapper[4679]: I0203 12:29:31.423299 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvx9g" Feb 03 12:29:31 crc kubenswrapper[4679]: I0203 12:29:31.427243 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" event={"ID":"6fc50826-5b8c-4973-bef7-78e861d37c96","Type":"ContainerStarted","Data":"bf2d3719b6add197078116420649ec742f97b8da5b2ba53b1ed3a50c62b3e4f1"} Feb 03 12:29:31 crc kubenswrapper[4679]: I0203 12:29:31.444652 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" podStartSLOduration=2.810845378 podStartE2EDuration="13.444631772s" podCreationTimestamp="2026-02-03 12:29:18 +0000 UTC" firstStartedPulling="2026-02-03 12:29:19.465989051 +0000 UTC m=+1431.940885139" lastFinishedPulling="2026-02-03 12:29:30.099775445 +0000 UTC m=+1442.574671533" observedRunningTime="2026-02-03 12:29:31.441741879 +0000 UTC m=+1443.916637977" watchObservedRunningTime="2026-02-03 12:29:31.444631772 +0000 UTC m=+1443.919527860" Feb 03 12:29:31 crc kubenswrapper[4679]: I0203 12:29:31.474038 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvx9g"] Feb 03 12:29:31 crc kubenswrapper[4679]: I0203 12:29:31.480134 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xvx9g"] Feb 03 12:29:32 crc kubenswrapper[4679]: I0203 12:29:32.234552 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" path="/var/lib/kubelet/pods/f776df2c-88d3-499e-8f5d-cb9b49080d06/volumes" Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.110679 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.563555 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.735753 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.735806 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" 
podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.735852 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.736583 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1900a15438f6b44d35273c5044d3b6a00c1d9eb0c447a4d0cab3da818bdee60"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:29:36 crc kubenswrapper[4679]: I0203 12:29:36.736641 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://b1900a15438f6b44d35273c5044d3b6a00c1d9eb0c447a4d0cab3da818bdee60" gracePeriod=600 Feb 03 12:29:37 crc kubenswrapper[4679]: I0203 12:29:37.486468 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="b1900a15438f6b44d35273c5044d3b6a00c1d9eb0c447a4d0cab3da818bdee60" exitCode=0 Feb 03 12:29:37 crc kubenswrapper[4679]: I0203 12:29:37.486544 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"b1900a15438f6b44d35273c5044d3b6a00c1d9eb0c447a4d0cab3da818bdee60"} Feb 03 12:29:37 crc kubenswrapper[4679]: I0203 12:29:37.487746 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"} Feb 03 12:29:37 crc kubenswrapper[4679]: I0203 12:29:37.487784 4679 scope.go:117] "RemoveContainer" containerID="c61e86b798113e11bc0b821a5c8a3fd559823d817a33888c3615c45ebf2d2b95" Feb 03 12:29:44 crc kubenswrapper[4679]: I0203 12:29:44.554910 4679 generic.go:334] "Generic (PLEG): container finished" podID="6fc50826-5b8c-4973-bef7-78e861d37c96" containerID="bf2d3719b6add197078116420649ec742f97b8da5b2ba53b1ed3a50c62b3e4f1" exitCode=0 Feb 03 12:29:44 crc kubenswrapper[4679]: I0203 12:29:44.555025 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" event={"ID":"6fc50826-5b8c-4973-bef7-78e861d37c96","Type":"ContainerDied","Data":"bf2d3719b6add197078116420649ec742f97b8da5b2ba53b1ed3a50c62b3e4f1"} Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.033235 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.166211 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fxkl\" (UniqueName: \"kubernetes.io/projected/6fc50826-5b8c-4973-bef7-78e861d37c96-kube-api-access-2fxkl\") pod \"6fc50826-5b8c-4973-bef7-78e861d37c96\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.167151 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-ssh-key-openstack-edpm-ipam\") pod \"6fc50826-5b8c-4973-bef7-78e861d37c96\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.167743 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-inventory\") pod \"6fc50826-5b8c-4973-bef7-78e861d37c96\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.167800 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-repo-setup-combined-ca-bundle\") pod \"6fc50826-5b8c-4973-bef7-78e861d37c96\" (UID: \"6fc50826-5b8c-4973-bef7-78e861d37c96\") " Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.172453 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6fc50826-5b8c-4973-bef7-78e861d37c96" (UID: "6fc50826-5b8c-4973-bef7-78e861d37c96"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.172498 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc50826-5b8c-4973-bef7-78e861d37c96-kube-api-access-2fxkl" (OuterVolumeSpecName: "kube-api-access-2fxkl") pod "6fc50826-5b8c-4973-bef7-78e861d37c96" (UID: "6fc50826-5b8c-4973-bef7-78e861d37c96"). InnerVolumeSpecName "kube-api-access-2fxkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.194169 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-inventory" (OuterVolumeSpecName: "inventory") pod "6fc50826-5b8c-4973-bef7-78e861d37c96" (UID: "6fc50826-5b8c-4973-bef7-78e861d37c96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.196115 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fc50826-5b8c-4973-bef7-78e861d37c96" (UID: "6fc50826-5b8c-4973-bef7-78e861d37c96"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.269586 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.269628 4679 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.269642 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fxkl\" (UniqueName: \"kubernetes.io/projected/6fc50826-5b8c-4973-bef7-78e861d37c96-kube-api-access-2fxkl\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.269655 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fc50826-5b8c-4973-bef7-78e861d37c96-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.593092 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" event={"ID":"6fc50826-5b8c-4973-bef7-78e861d37c96","Type":"ContainerDied","Data":"e8fc9e647cb23b61eb2d7037393cee8f8dd98803fdd130bcd9c0587481adf396"} Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.593804 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8fc9e647cb23b61eb2d7037393cee8f8dd98803fdd130bcd9c0587481adf396" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.593209 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.729385 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht"] Feb 03 12:29:46 crc kubenswrapper[4679]: E0203 12:29:46.729919 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="registry-server" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.729967 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="registry-server" Feb 03 12:29:46 crc kubenswrapper[4679]: E0203 12:29:46.729994 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="extract-utilities" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.730001 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="extract-utilities" Feb 03 12:29:46 crc kubenswrapper[4679]: E0203 12:29:46.730014 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc50826-5b8c-4973-bef7-78e861d37c96" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.730021 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc50826-5b8c-4973-bef7-78e861d37c96" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 12:29:46 crc kubenswrapper[4679]: E0203 12:29:46.730048 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="extract-content" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.730054 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="extract-content" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.730414 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="f776df2c-88d3-499e-8f5d-cb9b49080d06" containerName="registry-server" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.730431 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc50826-5b8c-4973-bef7-78e861d37c96" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.731158 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.733827 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.733944 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.735403 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.735545 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.745040 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht"] Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.880584 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bkw\" (UniqueName: \"kubernetes.io/projected/b709c6fe-9a41-44fd-9350-989aa43947da-kube-api-access-l5bkw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.880717 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.880858 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.982658 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.982775 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bkw\" (UniqueName: \"kubernetes.io/projected/b709c6fe-9a41-44fd-9350-989aa43947da-kube-api-access-l5bkw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.982818 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.989045 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:46 crc kubenswrapper[4679]: I0203 12:29:46.989073 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:47 crc kubenswrapper[4679]: I0203 12:29:47.014497 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bkw\" (UniqueName: \"kubernetes.io/projected/b709c6fe-9a41-44fd-9350-989aa43947da-kube-api-access-l5bkw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5bzht\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:47 crc kubenswrapper[4679]: I0203 12:29:47.047486 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:47 crc kubenswrapper[4679]: I0203 12:29:47.590094 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht"] Feb 03 12:29:47 crc kubenswrapper[4679]: I0203 12:29:47.606763 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" event={"ID":"b709c6fe-9a41-44fd-9350-989aa43947da","Type":"ContainerStarted","Data":"07811ff0f04ea804084378afc9af5b40a1a350261a155f91665aba0d92456cfb"} Feb 03 12:29:48 crc kubenswrapper[4679]: I0203 12:29:48.619109 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" event={"ID":"b709c6fe-9a41-44fd-9350-989aa43947da","Type":"ContainerStarted","Data":"4902a2fd9df713c2c41ee0420224be54f25dfe5a0874abca1de97f9f03194287"} Feb 03 12:29:48 crc kubenswrapper[4679]: I0203 12:29:48.641197 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" podStartSLOduration=2.226621012 podStartE2EDuration="2.641178875s" podCreationTimestamp="2026-02-03 12:29:46 +0000 UTC" firstStartedPulling="2026-02-03 12:29:47.592282265 +0000 UTC m=+1460.067178353" lastFinishedPulling="2026-02-03 12:29:48.006840118 +0000 UTC m=+1460.481736216" observedRunningTime="2026-02-03 12:29:48.640106972 +0000 UTC m=+1461.115003060" watchObservedRunningTime="2026-02-03 12:29:48.641178875 +0000 UTC m=+1461.116074963" Feb 03 12:29:51 crc kubenswrapper[4679]: I0203 12:29:51.658265 4679 generic.go:334] "Generic (PLEG): container finished" podID="b709c6fe-9a41-44fd-9350-989aa43947da" containerID="4902a2fd9df713c2c41ee0420224be54f25dfe5a0874abca1de97f9f03194287" exitCode=0 Feb 03 12:29:51 crc kubenswrapper[4679]: I0203 12:29:51.658341 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" event={"ID":"b709c6fe-9a41-44fd-9350-989aa43947da","Type":"ContainerDied","Data":"4902a2fd9df713c2c41ee0420224be54f25dfe5a0874abca1de97f9f03194287"} Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.106071 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.131277 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-ssh-key-openstack-edpm-ipam\") pod \"b709c6fe-9a41-44fd-9350-989aa43947da\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.131583 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5bkw\" (UniqueName: \"kubernetes.io/projected/b709c6fe-9a41-44fd-9350-989aa43947da-kube-api-access-l5bkw\") pod \"b709c6fe-9a41-44fd-9350-989aa43947da\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.175743 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b709c6fe-9a41-44fd-9350-989aa43947da-kube-api-access-l5bkw" (OuterVolumeSpecName: "kube-api-access-l5bkw") pod "b709c6fe-9a41-44fd-9350-989aa43947da" (UID: "b709c6fe-9a41-44fd-9350-989aa43947da"). InnerVolumeSpecName "kube-api-access-l5bkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.198679 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b709c6fe-9a41-44fd-9350-989aa43947da" (UID: "b709c6fe-9a41-44fd-9350-989aa43947da"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.233576 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-inventory\") pod \"b709c6fe-9a41-44fd-9350-989aa43947da\" (UID: \"b709c6fe-9a41-44fd-9350-989aa43947da\") " Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.234409 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5bkw\" (UniqueName: \"kubernetes.io/projected/b709c6fe-9a41-44fd-9350-989aa43947da-kube-api-access-l5bkw\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.234434 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.262179 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-inventory" (OuterVolumeSpecName: "inventory") pod "b709c6fe-9a41-44fd-9350-989aa43947da" (UID: "b709c6fe-9a41-44fd-9350-989aa43947da"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.338520 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b709c6fe-9a41-44fd-9350-989aa43947da-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.678324 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" event={"ID":"b709c6fe-9a41-44fd-9350-989aa43947da","Type":"ContainerDied","Data":"07811ff0f04ea804084378afc9af5b40a1a350261a155f91665aba0d92456cfb"} Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.678390 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07811ff0f04ea804084378afc9af5b40a1a350261a155f91665aba0d92456cfb" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.678448 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5bzht" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.764130 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w"] Feb 03 12:29:53 crc kubenswrapper[4679]: E0203 12:29:53.764747 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b709c6fe-9a41-44fd-9350-989aa43947da" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.764773 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b709c6fe-9a41-44fd-9350-989aa43947da" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.765176 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b709c6fe-9a41-44fd-9350-989aa43947da" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.767849 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.770447 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.770556 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.770666 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.772254 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.774242 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w"] Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.847587 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqpn\" (UniqueName: \"kubernetes.io/projected/83eaca34-8d94-48a8-8e56-58db37e376ab-kube-api-access-mvqpn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.847650 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.848011 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.848219 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.950135 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.950230 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvqpn\" (UniqueName: 
\"kubernetes.io/projected/83eaca34-8d94-48a8-8e56-58db37e376ab-kube-api-access-mvqpn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.950268 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.950381 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.957003 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.957157 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.957298 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:53 crc kubenswrapper[4679]: I0203 12:29:53.967877 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvqpn\" (UniqueName: \"kubernetes.io/projected/83eaca34-8d94-48a8-8e56-58db37e376ab-kube-api-access-mvqpn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:54 crc kubenswrapper[4679]: I0203 12:29:54.093128 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:29:55 crc kubenswrapper[4679]: I0203 12:29:55.453824 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w"] Feb 03 12:29:56 crc kubenswrapper[4679]: I0203 12:29:56.341192 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" event={"ID":"83eaca34-8d94-48a8-8e56-58db37e376ab","Type":"ContainerStarted","Data":"be4ca8f40be5212bc39dbe958487f9fcf70d9f411b02af85e6b97519a798fb2d"} Feb 03 12:29:56 crc kubenswrapper[4679]: I0203 12:29:56.341791 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" event={"ID":"83eaca34-8d94-48a8-8e56-58db37e376ab","Type":"ContainerStarted","Data":"f8130f1beeeb7f1a6e2825fb5128affff467a8c040ed5534889dc711bd82e096"} Feb 03 12:29:56 crc kubenswrapper[4679]: I0203 12:29:56.363996 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" podStartSLOduration=2.834159861 podStartE2EDuration="3.363967637s" podCreationTimestamp="2026-02-03 12:29:53 +0000 UTC" firstStartedPulling="2026-02-03 12:29:55.463250336 +0000 UTC m=+1467.938146414" lastFinishedPulling="2026-02-03 12:29:55.993058112 +0000 UTC m=+1468.467954190" observedRunningTime="2026-02-03 12:29:56.357305772 +0000 UTC m=+1468.832201880" watchObservedRunningTime="2026-02-03 12:29:56.363967637 +0000 UTC m=+1468.838863725" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.146273 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n"] Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.148512 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.151009 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.151398 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.157899 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n"] Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.286506 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltczw\" (UniqueName: \"kubernetes.io/projected/7056a891-e884-4966-bbd1-8b22706082f1-kube-api-access-ltczw\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.286590 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7056a891-e884-4966-bbd1-8b22706082f1-secret-volume\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.286627 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7056a891-e884-4966-bbd1-8b22706082f1-config-volume\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.388667 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltczw\" (UniqueName: \"kubernetes.io/projected/7056a891-e884-4966-bbd1-8b22706082f1-kube-api-access-ltczw\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.388736 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7056a891-e884-4966-bbd1-8b22706082f1-secret-volume\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.388801 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7056a891-e884-4966-bbd1-8b22706082f1-config-volume\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.390591 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7056a891-e884-4966-bbd1-8b22706082f1-config-volume\") pod 
\"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.395558 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7056a891-e884-4966-bbd1-8b22706082f1-secret-volume\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.409060 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltczw\" (UniqueName: \"kubernetes.io/projected/7056a891-e884-4966-bbd1-8b22706082f1-kube-api-access-ltczw\") pod \"collect-profiles-29502030-8v67n\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.476914 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:00 crc kubenswrapper[4679]: W0203 12:30:00.959439 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7056a891_e884_4966_bbd1_8b22706082f1.slice/crio-003420ee844e8f75b371245574e2f83540ed324fc121918daddac120e6246834 WatchSource:0}: Error finding container 003420ee844e8f75b371245574e2f83540ed324fc121918daddac120e6246834: Status 404 returned error can't find the container with id 003420ee844e8f75b371245574e2f83540ed324fc121918daddac120e6246834 Feb 03 12:30:00 crc kubenswrapper[4679]: I0203 12:30:00.977247 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n"] Feb 03 12:30:01 crc kubenswrapper[4679]: I0203 12:30:01.389680 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" event={"ID":"7056a891-e884-4966-bbd1-8b22706082f1","Type":"ContainerStarted","Data":"40cabf5de210551ac48649714025c28c3117f49881f770191c7d12dada34eb93"} Feb 03 12:30:01 crc kubenswrapper[4679]: I0203 12:30:01.389726 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" event={"ID":"7056a891-e884-4966-bbd1-8b22706082f1","Type":"ContainerStarted","Data":"003420ee844e8f75b371245574e2f83540ed324fc121918daddac120e6246834"} Feb 03 12:30:01 crc kubenswrapper[4679]: I0203 12:30:01.413708 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" podStartSLOduration=1.413686497 podStartE2EDuration="1.413686497s" podCreationTimestamp="2026-02-03 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 12:30:01.402624766 +0000 UTC m=+1473.877520854" watchObservedRunningTime="2026-02-03 12:30:01.413686497 +0000 UTC m=+1473.888582575" Feb 03 12:30:02 crc kubenswrapper[4679]: I0203 12:30:02.404699 4679 generic.go:334] "Generic (PLEG): container finished" podID="7056a891-e884-4966-bbd1-8b22706082f1" containerID="40cabf5de210551ac48649714025c28c3117f49881f770191c7d12dada34eb93" exitCode=0 Feb 03 12:30:02 crc kubenswrapper[4679]: I0203 12:30:02.404757 
4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" event={"ID":"7056a891-e884-4966-bbd1-8b22706082f1","Type":"ContainerDied","Data":"40cabf5de210551ac48649714025c28c3117f49881f770191c7d12dada34eb93"} Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.764282 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.960440 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7056a891-e884-4966-bbd1-8b22706082f1-config-volume\") pod \"7056a891-e884-4966-bbd1-8b22706082f1\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.960597 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7056a891-e884-4966-bbd1-8b22706082f1-secret-volume\") pod \"7056a891-e884-4966-bbd1-8b22706082f1\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.960742 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltczw\" (UniqueName: \"kubernetes.io/projected/7056a891-e884-4966-bbd1-8b22706082f1-kube-api-access-ltczw\") pod \"7056a891-e884-4966-bbd1-8b22706082f1\" (UID: \"7056a891-e884-4966-bbd1-8b22706082f1\") " Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.962940 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7056a891-e884-4966-bbd1-8b22706082f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "7056a891-e884-4966-bbd1-8b22706082f1" (UID: "7056a891-e884-4966-bbd1-8b22706082f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.969800 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7056a891-e884-4966-bbd1-8b22706082f1-kube-api-access-ltczw" (OuterVolumeSpecName: "kube-api-access-ltczw") pod "7056a891-e884-4966-bbd1-8b22706082f1" (UID: "7056a891-e884-4966-bbd1-8b22706082f1"). InnerVolumeSpecName "kube-api-access-ltczw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:30:03 crc kubenswrapper[4679]: I0203 12:30:03.970156 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7056a891-e884-4966-bbd1-8b22706082f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7056a891-e884-4966-bbd1-8b22706082f1" (UID: "7056a891-e884-4966-bbd1-8b22706082f1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:30:04 crc kubenswrapper[4679]: I0203 12:30:04.063646 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltczw\" (UniqueName: \"kubernetes.io/projected/7056a891-e884-4966-bbd1-8b22706082f1-kube-api-access-ltczw\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:04 crc kubenswrapper[4679]: I0203 12:30:04.063720 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7056a891-e884-4966-bbd1-8b22706082f1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:04 crc kubenswrapper[4679]: I0203 12:30:04.063743 4679 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7056a891-e884-4966-bbd1-8b22706082f1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:30:04 crc kubenswrapper[4679]: I0203 12:30:04.428993 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" event={"ID":"7056a891-e884-4966-bbd1-8b22706082f1","Type":"ContainerDied","Data":"003420ee844e8f75b371245574e2f83540ed324fc121918daddac120e6246834"} Feb 03 12:30:04 crc kubenswrapper[4679]: I0203 12:30:04.429158 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="003420ee844e8f75b371245574e2f83540ed324fc121918daddac120e6246834" Feb 03 12:30:04 crc kubenswrapper[4679]: I0203 12:30:04.430665 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n" Feb 03 12:30:32 crc kubenswrapper[4679]: I0203 12:30:32.333155 4679 scope.go:117] "RemoveContainer" containerID="6b076044091aa23937d7b37ecc22539c78017a31fd0632f4be7bd60bd09ad25c" Feb 03 12:30:32 crc kubenswrapper[4679]: I0203 12:30:32.388694 4679 scope.go:117] "RemoveContainer" containerID="05746132873d2074ba0b5f422e47693c5787b44c71f8e9c08fb30a223f176170" Feb 03 12:31:32 crc kubenswrapper[4679]: I0203 12:31:32.521701 4679 scope.go:117] "RemoveContainer" containerID="04476df0de14bbd6a3118da537a3ee9c0ae29b106e9bd50997e2ea626b1af3be" Feb 03 12:31:32 crc kubenswrapper[4679]: I0203 12:31:32.545951 4679 scope.go:117] "RemoveContainer" containerID="749a751a89e0ab62f219c86114e5872241def892a351f8505e45038cf3e6db70" Feb 03 12:31:32 crc kubenswrapper[4679]: I0203 12:31:32.606267 4679 scope.go:117] "RemoveContainer" containerID="3368218516df28db90be66e671fee30820f616eda539a01a0ffb8cfe414952e4" Feb 03 12:31:35 crc kubenswrapper[4679]: I0203 12:31:35.986491 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2h2v"] Feb 03 12:31:35 crc kubenswrapper[4679]: E0203 12:31:35.987806 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7056a891-e884-4966-bbd1-8b22706082f1" containerName="collect-profiles" Feb 03 12:31:35 crc kubenswrapper[4679]: I0203 12:31:35.987825 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7056a891-e884-4966-bbd1-8b22706082f1" containerName="collect-profiles" Feb 03 12:31:35 crc kubenswrapper[4679]: I0203 12:31:35.988071 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7056a891-e884-4966-bbd1-8b22706082f1" containerName="collect-profiles" Feb 03 12:31:35 crc kubenswrapper[4679]: I0203 12:31:35.990199 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.014209 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2h2v"] Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.127272 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8pp\" (UniqueName: \"kubernetes.io/projected/8d01f6cf-05bd-412e-83a9-8b0325c48921-kube-api-access-rw8pp\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.127610 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-catalog-content\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.127710 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-utilities\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.230285 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8pp\" (UniqueName: \"kubernetes.io/projected/8d01f6cf-05bd-412e-83a9-8b0325c48921-kube-api-access-rw8pp\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.230575 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-catalog-content\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.230617 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-utilities\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.231117 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-catalog-content\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.231143 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-utilities\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.263998 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rw8pp\" (UniqueName: \"kubernetes.io/projected/8d01f6cf-05bd-412e-83a9-8b0325c48921-kube-api-access-rw8pp\") pod \"community-operators-s2h2v\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.336097 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:36 crc kubenswrapper[4679]: I0203 12:31:36.877298 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2h2v"] Feb 03 12:31:37 crc kubenswrapper[4679]: I0203 12:31:37.438473 4679 generic.go:334] "Generic (PLEG): container finished" podID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerID="505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac" exitCode=0 Feb 03 12:31:37 crc kubenswrapper[4679]: I0203 12:31:37.438798 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerDied","Data":"505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac"} Feb 03 12:31:37 crc kubenswrapper[4679]: I0203 12:31:37.438831 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerStarted","Data":"4ebb1fefc08a5069651ee350b3ecef8a6e0f93b84b51c8a25edbf571fdc83263"} Feb 03 12:31:38 crc kubenswrapper[4679]: I0203 12:31:38.452405 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerStarted","Data":"a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f"} Feb 03 12:31:39 crc kubenswrapper[4679]: I0203 12:31:39.469190 4679 generic.go:334] "Generic (PLEG): container finished" podID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerID="a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f" exitCode=0 Feb 03 12:31:39 crc kubenswrapper[4679]: I0203 12:31:39.469461 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerDied","Data":"a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f"} Feb 03 12:31:40 crc kubenswrapper[4679]: I0203 12:31:40.481763 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerStarted","Data":"e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d"} Feb 03 12:31:40 crc kubenswrapper[4679]: I0203 12:31:40.504972 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2h2v" podStartSLOduration=3.00240763 podStartE2EDuration="5.504953991s" podCreationTimestamp="2026-02-03 12:31:35 +0000 UTC" firstStartedPulling="2026-02-03 12:31:37.441769165 +0000 UTC m=+1569.916665253" lastFinishedPulling="2026-02-03 12:31:39.944315496 +0000 UTC m=+1572.419211614" observedRunningTime="2026-02-03 12:31:40.501891763 +0000 UTC m=+1572.976787851" watchObservedRunningTime="2026-02-03 12:31:40.504953991 +0000 UTC m=+1572.979850079" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.372531 4679 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8nbn7"] Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.377040 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.395997 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nbn7"] Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.485066 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbl6v\" (UniqueName: \"kubernetes.io/projected/00f66aa3-544e-41c0-8771-62903e57221c-kube-api-access-cbl6v\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.485137 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-catalog-content\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.485466 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-utilities\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.587171 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-utilities\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.587339 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbl6v\" (UniqueName: \"kubernetes.io/projected/00f66aa3-544e-41c0-8771-62903e57221c-kube-api-access-cbl6v\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.587392 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-catalog-content\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.588025 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-catalog-content\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.588138 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-utilities\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") 
" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.612804 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbl6v\" (UniqueName: \"kubernetes.io/projected/00f66aa3-544e-41c0-8771-62903e57221c-kube-api-access-cbl6v\") pod \"redhat-marketplace-8nbn7\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:43 crc kubenswrapper[4679]: I0203 12:31:43.699910 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:44 crc kubenswrapper[4679]: I0203 12:31:44.170456 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nbn7"] Feb 03 12:31:44 crc kubenswrapper[4679]: I0203 12:31:44.544908 4679 generic.go:334] "Generic (PLEG): container finished" podID="00f66aa3-544e-41c0-8771-62903e57221c" containerID="11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f" exitCode=0 Feb 03 12:31:44 crc kubenswrapper[4679]: I0203 12:31:44.545014 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerDied","Data":"11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f"} Feb 03 12:31:44 crc kubenswrapper[4679]: I0203 12:31:44.545422 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerStarted","Data":"f0cf99c0dc929bd3ce85c9e49a68ac1191cf82738c8324c10bf105345873dc02"} Feb 03 12:31:45 crc kubenswrapper[4679]: I0203 12:31:45.558272 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerStarted","Data":"5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a"} Feb 03 12:31:46 crc kubenswrapper[4679]: I0203 12:31:46.336617 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:46 crc kubenswrapper[4679]: I0203 12:31:46.338265 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:46 crc kubenswrapper[4679]: I0203 12:31:46.421663 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:46 crc kubenswrapper[4679]: I0203 12:31:46.570737 4679 generic.go:334] "Generic (PLEG): container finished" podID="00f66aa3-544e-41c0-8771-62903e57221c" containerID="5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a" exitCode=0 Feb 03 12:31:46 crc kubenswrapper[4679]: I0203 12:31:46.571781 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerDied","Data":"5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a"} Feb 03 12:31:46 crc kubenswrapper[4679]: I0203 12:31:46.631486 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:47 crc kubenswrapper[4679]: I0203 12:31:47.582411 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" 
event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerStarted","Data":"c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d"} Feb 03 12:31:47 crc kubenswrapper[4679]: I0203 12:31:47.611323 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8nbn7" podStartSLOduration=2.162994476 podStartE2EDuration="4.61128484s" podCreationTimestamp="2026-02-03 12:31:43 +0000 UTC" firstStartedPulling="2026-02-03 12:31:44.548281208 +0000 UTC m=+1577.023177296" lastFinishedPulling="2026-02-03 12:31:46.996571572 +0000 UTC m=+1579.471467660" observedRunningTime="2026-02-03 12:31:47.602581787 +0000 UTC m=+1580.077477885" watchObservedRunningTime="2026-02-03 12:31:47.61128484 +0000 UTC m=+1580.086180968" Feb 03 12:31:48 crc kubenswrapper[4679]: I0203 12:31:48.766540 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2h2v"] Feb 03 12:31:49 crc kubenswrapper[4679]: I0203 12:31:49.601165 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s2h2v" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="registry-server" containerID="cri-o://e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d" gracePeriod=2 Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.200440 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.337629 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw8pp\" (UniqueName: \"kubernetes.io/projected/8d01f6cf-05bd-412e-83a9-8b0325c48921-kube-api-access-rw8pp\") pod \"8d01f6cf-05bd-412e-83a9-8b0325c48921\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.337867 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-catalog-content\") pod \"8d01f6cf-05bd-412e-83a9-8b0325c48921\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.338901 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-utilities\") pod \"8d01f6cf-05bd-412e-83a9-8b0325c48921\" (UID: \"8d01f6cf-05bd-412e-83a9-8b0325c48921\") " Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.339957 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-utilities" (OuterVolumeSpecName: "utilities") pod "8d01f6cf-05bd-412e-83a9-8b0325c48921" (UID: "8d01f6cf-05bd-412e-83a9-8b0325c48921"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.343247 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.343291 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d01f6cf-05bd-412e-83a9-8b0325c48921-kube-api-access-rw8pp" (OuterVolumeSpecName: "kube-api-access-rw8pp") pod "8d01f6cf-05bd-412e-83a9-8b0325c48921" (UID: "8d01f6cf-05bd-412e-83a9-8b0325c48921"). InnerVolumeSpecName "kube-api-access-rw8pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.436925 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d01f6cf-05bd-412e-83a9-8b0325c48921" (UID: "8d01f6cf-05bd-412e-83a9-8b0325c48921"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.446168 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw8pp\" (UniqueName: \"kubernetes.io/projected/8d01f6cf-05bd-412e-83a9-8b0325c48921-kube-api-access-rw8pp\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.446313 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d01f6cf-05bd-412e-83a9-8b0325c48921-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.616151 4679 generic.go:334] "Generic (PLEG): container finished" podID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerID="e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d" exitCode=0 Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.616209 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerDied","Data":"e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d"} Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.616255 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2h2v" event={"ID":"8d01f6cf-05bd-412e-83a9-8b0325c48921","Type":"ContainerDied","Data":"4ebb1fefc08a5069651ee350b3ecef8a6e0f93b84b51c8a25edbf571fdc83263"} Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.616268 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2h2v" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.616284 4679 scope.go:117] "RemoveContainer" containerID="e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.653058 4679 scope.go:117] "RemoveContainer" containerID="a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.677147 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2h2v"] Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.683038 4679 scope.go:117] "RemoveContainer" containerID="505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.693125 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2h2v"] Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.749787 4679 scope.go:117] "RemoveContainer" containerID="e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d" Feb 03 12:31:50 crc kubenswrapper[4679]: E0203 12:31:50.750319 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d\": container with ID starting with e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d not found: ID does not exist" containerID="e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.750402 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d"} err="failed to get container status \"e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d\": rpc error: code = NotFound desc = could not find container \"e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d\": container with ID starting with e86884b33d7f77030c72e1413badd2ad82364efa940c0920d4f83234c2ec785d not found: ID does not exist" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.750443 4679 scope.go:117] "RemoveContainer" containerID="a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f" Feb 03 12:31:50 crc kubenswrapper[4679]: E0203 12:31:50.751042 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f\": container with ID starting with a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f not found: ID does not exist" containerID="a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.751071 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f"} err="failed to get container status \"a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f\": rpc error: code = NotFound desc = could not find container \"a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f\": container with ID starting with a8a6ad18238a4273017b9b5c8444512f92576ee7285263c379e71af12ef0599f not found: ID does not exist" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.751091 4679 scope.go:117] "RemoveContainer" 
containerID="505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac" Feb 03 12:31:50 crc kubenswrapper[4679]: E0203 12:31:50.751489 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac\": container with ID starting with 505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac not found: ID does not exist" containerID="505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac" Feb 03 12:31:50 crc kubenswrapper[4679]: I0203 12:31:50.751655 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac"} err="failed to get container status \"505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac\": rpc error: code = NotFound desc = could not find container \"505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac\": container with ID starting with 505bc80f93891792f59444d79a74abbb38ae8e934b489ba1892c7fb44ff70cac not found: ID does not exist" Feb 03 12:31:52 crc kubenswrapper[4679]: I0203 12:31:52.230237 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" path="/var/lib/kubelet/pods/8d01f6cf-05bd-412e-83a9-8b0325c48921/volumes" Feb 03 12:31:53 crc kubenswrapper[4679]: I0203 12:31:53.700556 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:53 crc kubenswrapper[4679]: I0203 12:31:53.701088 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:53 crc kubenswrapper[4679]: I0203 12:31:53.769508 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:54 crc kubenswrapper[4679]: I0203 12:31:54.754079 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:54 crc kubenswrapper[4679]: I0203 12:31:54.814355 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nbn7"] Feb 03 12:31:56 crc kubenswrapper[4679]: I0203 12:31:56.698092 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8nbn7" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="registry-server" containerID="cri-o://c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d" gracePeriod=2 Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.220880 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.369042 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbl6v\" (UniqueName: \"kubernetes.io/projected/00f66aa3-544e-41c0-8771-62903e57221c-kube-api-access-cbl6v\") pod \"00f66aa3-544e-41c0-8771-62903e57221c\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.369229 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-utilities\") pod \"00f66aa3-544e-41c0-8771-62903e57221c\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.369284 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-catalog-content\") pod \"00f66aa3-544e-41c0-8771-62903e57221c\" (UID: \"00f66aa3-544e-41c0-8771-62903e57221c\") " Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.370802 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-utilities" (OuterVolumeSpecName: "utilities") pod "00f66aa3-544e-41c0-8771-62903e57221c" (UID: "00f66aa3-544e-41c0-8771-62903e57221c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.377198 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f66aa3-544e-41c0-8771-62903e57221c-kube-api-access-cbl6v" (OuterVolumeSpecName: "kube-api-access-cbl6v") pod "00f66aa3-544e-41c0-8771-62903e57221c" (UID: "00f66aa3-544e-41c0-8771-62903e57221c"). InnerVolumeSpecName "kube-api-access-cbl6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.395730 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00f66aa3-544e-41c0-8771-62903e57221c" (UID: "00f66aa3-544e-41c0-8771-62903e57221c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.472479 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbl6v\" (UniqueName: \"kubernetes.io/projected/00f66aa3-544e-41c0-8771-62903e57221c-kube-api-access-cbl6v\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.472554 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.472565 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f66aa3-544e-41c0-8771-62903e57221c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.707475 4679 generic.go:334] "Generic (PLEG): container finished" podID="00f66aa3-544e-41c0-8771-62903e57221c" containerID="c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d" exitCode=0 Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.707516 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerDied","Data":"c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d"} Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.707543 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nbn7" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.707559 4679 scope.go:117] "RemoveContainer" containerID="c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.707549 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nbn7" event={"ID":"00f66aa3-544e-41c0-8771-62903e57221c","Type":"ContainerDied","Data":"f0cf99c0dc929bd3ce85c9e49a68ac1191cf82738c8324c10bf105345873dc02"} Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.741781 4679 scope.go:117] "RemoveContainer" containerID="5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.746223 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nbn7"] Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.774196 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nbn7"] Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.778607 4679 scope.go:117] "RemoveContainer" containerID="11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.826432 4679 scope.go:117] "RemoveContainer" containerID="c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d" Feb 03 12:31:57 crc kubenswrapper[4679]: E0203 12:31:57.827117 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d\": container with ID starting with c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d not found: ID does not exist" containerID="c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.827147 4679 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d"} err="failed to get container status \"c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d\": rpc error: code = NotFound desc = could not find container \"c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d\": container with ID starting with c7a97a2f91401b8b8fd248d08fabb594d32e0d2bb4fd0b9b20854aaf36be6c1d not found: ID does not exist" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.827171 4679 scope.go:117] "RemoveContainer" containerID="5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a" Feb 03 12:31:57 crc kubenswrapper[4679]: E0203 12:31:57.827459 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a\": container with ID starting with 5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a not found: ID does not exist" containerID="5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.827487 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a"} err="failed to get container status \"5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a\": rpc error: code = NotFound desc = could not find container \"5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a\": container with ID starting with 5869772127d03a048373a3f9d607b5161fe3410a246ee220383c6dca3d01c47a not found: ID does not exist" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.827501 4679 scope.go:117] "RemoveContainer" containerID="11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f" Feb 03 12:31:57 crc kubenswrapper[4679]: E0203 12:31:57.827767 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f\": container with ID starting with 11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f not found: ID does not exist" containerID="11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f" Feb 03 12:31:57 crc kubenswrapper[4679]: I0203 12:31:57.827797 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f"} err="failed to get container status \"11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f\": rpc error: code = NotFound desc = could not find container \"11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f\": container with ID starting with 11991369e47f0db05536833a858b774be2e75393e64ddcdcc20c90ad050ab33f not found: ID does not exist" Feb 03 12:31:58 crc kubenswrapper[4679]: I0203 12:31:58.249774 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f66aa3-544e-41c0-8771-62903e57221c" path="/var/lib/kubelet/pods/00f66aa3-544e-41c0-8771-62903e57221c/volumes" Feb 03 12:32:06 crc kubenswrapper[4679]: I0203 12:32:06.736422 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:32:06 crc kubenswrapper[4679]: I0203 12:32:06.739047 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:32:32 crc kubenswrapper[4679]: I0203 12:32:32.704236 4679 scope.go:117] "RemoveContainer" containerID="d125ca68c1b41f8a3e7cec808f9671b075ae7c419ea999fc8c34604964525143" Feb 03 12:32:32 crc kubenswrapper[4679]: I0203 12:32:32.732884 4679 scope.go:117] "RemoveContainer" containerID="3f2e5de3c76b494383c477d2712a4bd28c70ae0c518a0fc7e645aa099fcecb8e" Feb 03 12:32:32 crc kubenswrapper[4679]: I0203 12:32:32.756493 4679 scope.go:117] "RemoveContainer" containerID="d5562ced629fb4395f0c3713b3687397a23931c3b43470db26fa4057d82e6b49" Feb 03 12:32:32 crc kubenswrapper[4679]: I0203 12:32:32.782271 4679 scope.go:117] "RemoveContainer" containerID="ef333dfd8ffb3353933bc6f48b449bb91a9b778fc7dbdabd3b777b6b3ac6b958" Feb 03 12:32:36 crc kubenswrapper[4679]: I0203 12:32:36.735269 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:32:36 crc kubenswrapper[4679]: I0203 12:32:36.735764 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:33:06 crc kubenswrapper[4679]: I0203 12:33:06.736150 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:33:06 crc kubenswrapper[4679]: I0203 12:33:06.736902 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:33:06 crc kubenswrapper[4679]: I0203 12:33:06.737013 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:33:06 crc kubenswrapper[4679]: I0203 12:33:06.737853 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:33:06 crc kubenswrapper[4679]: I0203 12:33:06.737918 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" 
Feb 03 12:33:06 crc kubenswrapper[4679]: I0203 12:33:06.737918 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" gracePeriod=600
Feb 03 12:33:06 crc kubenswrapper[4679]: E0203 12:33:06.895318 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:33:07 crc kubenswrapper[4679]: I0203 12:33:07.447706 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" exitCode=0
Feb 03 12:33:07 crc kubenswrapper[4679]: I0203 12:33:07.447753 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"}
Feb 03 12:33:07 crc kubenswrapper[4679]: I0203 12:33:07.447796 4679 scope.go:117] "RemoveContainer" containerID="b1900a15438f6b44d35273c5044d3b6a00c1d9eb0c447a4d0cab3da818bdee60"
Feb 03 12:33:07 crc kubenswrapper[4679]: I0203 12:33:07.448620 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:33:07 crc kubenswrapper[4679]: E0203 12:33:07.448886 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:33:14 crc kubenswrapper[4679]: I0203 12:33:14.521638 4679 generic.go:334] "Generic (PLEG): container finished" podID="83eaca34-8d94-48a8-8e56-58db37e376ab" containerID="be4ca8f40be5212bc39dbe958487f9fcf70d9f411b02af85e6b97519a798fb2d" exitCode=0
Feb 03 12:33:14 crc kubenswrapper[4679]: I0203 12:33:14.521719 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" event={"ID":"83eaca34-8d94-48a8-8e56-58db37e376ab","Type":"ContainerDied","Data":"be4ca8f40be5212bc39dbe958487f9fcf70d9f411b02af85e6b97519a798fb2d"}
Feb 03 12:33:15 crc kubenswrapper[4679]: I0203 12:33:15.962054 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.133155 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvqpn\" (UniqueName: \"kubernetes.io/projected/83eaca34-8d94-48a8-8e56-58db37e376ab-kube-api-access-mvqpn\") pod \"83eaca34-8d94-48a8-8e56-58db37e376ab\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.133205 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-inventory\") pod \"83eaca34-8d94-48a8-8e56-58db37e376ab\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.133273 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-ssh-key-openstack-edpm-ipam\") pod \"83eaca34-8d94-48a8-8e56-58db37e376ab\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.133318 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-bootstrap-combined-ca-bundle\") pod \"83eaca34-8d94-48a8-8e56-58db37e376ab\" (UID: \"83eaca34-8d94-48a8-8e56-58db37e376ab\") " Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.140288 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "83eaca34-8d94-48a8-8e56-58db37e376ab" (UID: "83eaca34-8d94-48a8-8e56-58db37e376ab"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.144864 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83eaca34-8d94-48a8-8e56-58db37e376ab-kube-api-access-mvqpn" (OuterVolumeSpecName: "kube-api-access-mvqpn") pod "83eaca34-8d94-48a8-8e56-58db37e376ab" (UID: "83eaca34-8d94-48a8-8e56-58db37e376ab"). InnerVolumeSpecName "kube-api-access-mvqpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.169502 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-inventory" (OuterVolumeSpecName: "inventory") pod "83eaca34-8d94-48a8-8e56-58db37e376ab" (UID: "83eaca34-8d94-48a8-8e56-58db37e376ab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.177616 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83eaca34-8d94-48a8-8e56-58db37e376ab" (UID: "83eaca34-8d94-48a8-8e56-58db37e376ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.235583 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvqpn\" (UniqueName: \"kubernetes.io/projected/83eaca34-8d94-48a8-8e56-58db37e376ab-kube-api-access-mvqpn\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.235620 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.235630 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.235640 4679 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83eaca34-8d94-48a8-8e56-58db37e376ab-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.546754 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" event={"ID":"83eaca34-8d94-48a8-8e56-58db37e376ab","Type":"ContainerDied","Data":"f8130f1beeeb7f1a6e2825fb5128affff467a8c040ed5534889dc711bd82e096"} Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.546835 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8130f1beeeb7f1a6e2825fb5128affff467a8c040ed5534889dc711bd82e096" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.546887 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.642529 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9"] Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645080 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="registry-server" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645127 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="registry-server" Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645153 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="extract-content" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645160 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="extract-content" Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645171 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="extract-utilities" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645185 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="extract-utilities" Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645209 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="extract-utilities" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645215 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="extract-utilities" Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645240 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="extract-content" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645248 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="extract-content" Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645278 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eaca34-8d94-48a8-8e56-58db37e376ab" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645288 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="83eaca34-8d94-48a8-8e56-58db37e376ab" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 12:33:16 crc kubenswrapper[4679]: E0203 12:33:16.645299 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="registry-server" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.645307 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="registry-server" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.647015 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f66aa3-544e-41c0-8771-62903e57221c" containerName="registry-server" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.647065 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d01f6cf-05bd-412e-83a9-8b0325c48921" containerName="registry-server" Feb 03 12:33:16 crc 
kubenswrapper[4679]: I0203 12:33:16.647081 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eaca34-8d94-48a8-8e56-58db37e376ab" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.650450 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.653665 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.655705 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.656003 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.660714 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9"] Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.663647 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.746297 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkw4k\" (UniqueName: \"kubernetes.io/projected/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-kube-api-access-kkw4k\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.746408 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.746476 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.848497 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.848975 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkw4k\" (UniqueName: \"kubernetes.io/projected/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-kube-api-access-kkw4k\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: 
\"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.849003 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.853082 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.853344 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.868464 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkw4k\" (UniqueName: \"kubernetes.io/projected/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-kube-api-access-kkw4k\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:33:16 crc kubenswrapper[4679]: I0203 12:33:16.974671 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9"
Feb 03 12:33:17 crc kubenswrapper[4679]: I0203 12:33:17.504780 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9"]
Feb 03 12:33:17 crc kubenswrapper[4679]: I0203 12:33:17.558886 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" event={"ID":"aa12b60d-98f3-42a6-b429-cd451b1ec5fc","Type":"ContainerStarted","Data":"11f487bb63cc06c59bffde496a5e9f28f73bc9fef426426a5461601bf833264c"}
Feb 03 12:33:19 crc kubenswrapper[4679]: I0203 12:33:19.584984 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" event={"ID":"aa12b60d-98f3-42a6-b429-cd451b1ec5fc","Type":"ContainerStarted","Data":"6acc77b7db0e27c298bc57eeffbcb19bdee1d1960426facebce3d5a356f57bb9"}
Feb 03 12:33:19 crc kubenswrapper[4679]: I0203 12:33:19.605988 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" podStartSLOduration=2.747865036 podStartE2EDuration="3.605959902s" podCreationTimestamp="2026-02-03 12:33:16 +0000 UTC" firstStartedPulling="2026-02-03 12:33:17.509616483 +0000 UTC m=+1669.984512571" lastFinishedPulling="2026-02-03 12:33:18.367711349 +0000 UTC m=+1670.842607437" observedRunningTime="2026-02-03 12:33:19.60432621 +0000 UTC m=+1672.079222318" watchObservedRunningTime="2026-02-03 12:33:19.605959902 +0000 UTC m=+1672.080855990"
Feb 03 12:33:21 crc kubenswrapper[4679]: I0203 12:33:21.212425 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:33:21 crc kubenswrapper[4679]: E0203 12:33:21.212777 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:33:32 crc kubenswrapper[4679]: I0203 12:33:32.861610 4679 scope.go:117] "RemoveContainer" containerID="19f783f3bb0e53b9312493e4a6fe7224a0a7f436d01360361631f1fe4f307ced"
Feb 03 12:33:32 crc kubenswrapper[4679]: I0203 12:33:32.894036 4679 scope.go:117] "RemoveContainer" containerID="a92eeb2c5d3eb30dfd60ee6d9798719b7a76c62fcb55765e146b6eb799e3e3a9"
Feb 03 12:33:32 crc kubenswrapper[4679]: I0203 12:33:32.921901 4679 scope.go:117] "RemoveContainer" containerID="e6d5560b9cf46d3d95f66c3555d739c3c59a6eafea3d02913d6e8e8342244cec"
Feb 03 12:33:32 crc kubenswrapper[4679]: I0203 12:33:32.941240 4679 scope.go:117] "RemoveContainer" containerID="2114b8cc51f1bc8c4f634b9e01a84a647505145cffd235237d39cca0a12f71cb"
Feb 03 12:33:32 crc kubenswrapper[4679]: I0203 12:33:32.967492 4679 scope.go:117] "RemoveContainer" containerID="22621cfc1b54423d2673c789621a5822c9114633c4cf7769520cd5acdaba53c7"
Feb 03 12:33:33 crc kubenswrapper[4679]: I0203 12:33:33.003784 4679 scope.go:117] "RemoveContainer" containerID="d16beca63b9def5f0d4d58879296ecb92597a816b12f4c61293dca9b4b247eaa"
Feb 03 12:33:34 crc kubenswrapper[4679]: I0203 12:33:34.212044 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
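[Annotation] The pod_startup_latency_tracker entry above carries enough timestamps to reconcile its own numbers: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A self-contained Go check, with the timestamps copied from the entry (monotonic m=+ suffixes dropped):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-03 12:33:16 +0000 UTC")             // podCreationTimestamp
	running := parse("2026-02-03 12:33:19.605959902 +0000 UTC")   // watchObservedRunningTime
	pullStart := parse("2026-02-03 12:33:17.509616483 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2026-02-03 12:33:18.367711349 +0000 UTC")   // lastFinishedPulling

	e2e := running.Sub(created)         // 3.605959902s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 2.747865036s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
```

Of the roughly 3.6s end-to-end startup, 0.858094866s was image pull, which is exactly the gap between the E2E and SLO figures. The configure-network entry later in this log (12:35:10) reconciles the same way.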
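[Annotation] From here to the end of this stretch, machine-config-daemon-8qvcg repeats the same two-line pattern every 11 to 15 seconds: a RemoveContainer attempt immediately refused with the CrashLoopBackOff error, because the restart back-off has saturated at its 5m0s cap. (The bursts of unrelated RemoveContainer lines at 12:32:32, 12:33:32, 12:34:33, and 12:35:33 are a different mechanism, the kubelet's minute-periodic garbage collection of dead containers.) Below is a minimal sketch, not kubelet source, of the doubling-with-cap delay the error message describes, assuming the kubelet's usual 10s base:

```go
package main

import (
	"fmt"
	"time"
)

// Illustration only: a restart delay that starts at 10s, doubles per crash,
// and saturates at 5 minutes, the cap that the "back-off 5m0s restarting
// failed container" message refers to.
func backoffDelay(restarts int) time.Duration {
	const base, maxDelay = 10 * time.Second, 5 * time.Minute
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restarts=%d next delay=%v\n", r, backoffDelay(r))
	}
}
```

Once the delay has doubled past the cap it stays pinned there, which is why every sync attempt in this window reports the identical back-off 5m0s message rather than a growing delay.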
Feb 03 12:33:34 crc kubenswrapper[4679]: E0203 12:33:34.212593 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:33:47 crc kubenswrapper[4679]: I0203 12:33:47.211876 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:33:47 crc kubenswrapper[4679]: E0203 12:33:47.212664 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.050152 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2dwgx"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.062346 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0d49-account-create-update-dz92r"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.075247 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5s9wx"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.090525 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-efd2-account-create-update-nn9tr"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.101755 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ddd2-account-create-update-8vcg8"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.113030 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5pnsw"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.138023 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ddd2-account-create-update-8vcg8"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.147517 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-efd2-account-create-update-nn9tr"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.157185 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2dwgx"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.167030 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5s9wx"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.179206 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0d49-account-create-update-dz92r"] Feb 03 12:33:51 crc kubenswrapper[4679]: I0203 12:33:51.192685 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5pnsw"] Feb 03 12:33:52 crc kubenswrapper[4679]: I0203 12:33:52.229186 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dac64e9-0810-4130-b75e-9711ce1ab490" path="/var/lib/kubelet/pods/0dac64e9-0810-4130-b75e-9711ce1ab490/volumes" Feb 03 12:33:52 crc kubenswrapper[4679]: I0203 12:33:52.230677 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="20466e92-2a82-420b-b597-9040869317ec" path="/var/lib/kubelet/pods/20466e92-2a82-420b-b597-9040869317ec/volumes" Feb 03 12:33:52 crc kubenswrapper[4679]: I0203 12:33:52.231520 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e11319e-fc62-4971-a341-f8c39b7843cb" path="/var/lib/kubelet/pods/2e11319e-fc62-4971-a341-f8c39b7843cb/volumes" Feb 03 12:33:52 crc kubenswrapper[4679]: I0203 12:33:52.232260 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a1137b-9229-42b1-8764-7169cfc309f9" path="/var/lib/kubelet/pods/a2a1137b-9229-42b1-8764-7169cfc309f9/volumes" Feb 03 12:33:52 crc kubenswrapper[4679]: I0203 12:33:52.234001 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8866094-2e6c-4147-ae24-b3051ac32108" path="/var/lib/kubelet/pods/c8866094-2e6c-4147-ae24-b3051ac32108/volumes" Feb 03 12:33:52 crc kubenswrapper[4679]: I0203 12:33:52.234851 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987" path="/var/lib/kubelet/pods/ffb0c8cd-ec7e-4be2-8ef2-c5158bb67987/volumes" Feb 03 12:34:00 crc kubenswrapper[4679]: I0203 12:34:00.211534 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:34:00 crc kubenswrapper[4679]: E0203 12:34:00.212261 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:34:11 crc kubenswrapper[4679]: I0203 12:34:11.048168 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9xc68"] Feb 03 12:34:11 crc kubenswrapper[4679]: I0203 12:34:11.058262 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9xc68"] Feb 03 12:34:12 crc kubenswrapper[4679]: I0203 12:34:12.223852 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72232599-3fc6-423f-a36f-d684a7b77fef" path="/var/lib/kubelet/pods/72232599-3fc6-423f-a36f-d684a7b77fef/volumes" Feb 03 12:34:13 crc kubenswrapper[4679]: I0203 12:34:13.211944 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:34:13 crc kubenswrapper[4679]: E0203 12:34:13.212710 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:34:19 crc kubenswrapper[4679]: I0203 12:34:19.032055 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fhzpb"] Feb 03 12:34:19 crc kubenswrapper[4679]: I0203 12:34:19.042169 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-j8w92"] Feb 03 12:34:19 crc kubenswrapper[4679]: I0203 12:34:19.055321 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-q8nng"] Feb 03 
12:34:19 crc kubenswrapper[4679]: I0203 12:34:19.067320 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fhzpb"] Feb 03 12:34:19 crc kubenswrapper[4679]: I0203 12:34:19.078143 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-j8w92"] Feb 03 12:34:19 crc kubenswrapper[4679]: I0203 12:34:19.094919 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-q8nng"] Feb 03 12:34:20 crc kubenswrapper[4679]: I0203 12:34:20.227078 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a99d9a-c1b3-43a0-afb7-592a21d29b18" path="/var/lib/kubelet/pods/08a99d9a-c1b3-43a0-afb7-592a21d29b18/volumes" Feb 03 12:34:20 crc kubenswrapper[4679]: I0203 12:34:20.228793 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2420e5-7ea0-4879-9e16-1721fc087527" path="/var/lib/kubelet/pods/2b2420e5-7ea0-4879-9e16-1721fc087527/volumes" Feb 03 12:34:20 crc kubenswrapper[4679]: I0203 12:34:20.229392 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7b8990-8fe4-4e3c-bdf2-41d9039afed2" path="/var/lib/kubelet/pods/9a7b8990-8fe4-4e3c-bdf2-41d9039afed2/volumes" Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.062652 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-flh2f"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.092799 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8ba4-account-create-update-rp2sp"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.101531 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6efb-account-create-update-ldrrh"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.109445 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0375-account-create-update-v7bhg"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.118562 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8ba4-account-create-update-rp2sp"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.129099 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0375-account-create-update-v7bhg"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.138056 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-flh2f"] Feb 03 12:34:23 crc kubenswrapper[4679]: I0203 12:34:23.146812 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6efb-account-create-update-ldrrh"] Feb 03 12:34:24 crc kubenswrapper[4679]: I0203 12:34:24.222781 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9b136b-ec91-4486-af62-ec1f49e4e010" path="/var/lib/kubelet/pods/2d9b136b-ec91-4486-af62-ec1f49e4e010/volumes" Feb 03 12:34:24 crc kubenswrapper[4679]: I0203 12:34:24.223843 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b5e1e2-7d27-460b-920e-4d131c25b9ff" path="/var/lib/kubelet/pods/34b5e1e2-7d27-460b-920e-4d131c25b9ff/volumes" Feb 03 12:34:24 crc kubenswrapper[4679]: I0203 12:34:24.224378 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83970ea-96e1-479c-ac08-e26d41f50ed2" path="/var/lib/kubelet/pods/a83970ea-96e1-479c-ac08-e26d41f50ed2/volumes" Feb 03 12:34:24 crc kubenswrapper[4679]: I0203 12:34:24.224869 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab63a515-b3d6-4b33-a9ff-1ed746139a03" 
path="/var/lib/kubelet/pods/ab63a515-b3d6-4b33-a9ff-1ed746139a03/volumes" Feb 03 12:34:28 crc kubenswrapper[4679]: I0203 12:34:28.219168 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:34:28 crc kubenswrapper[4679]: E0203 12:34:28.219931 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.100380 4679 scope.go:117] "RemoveContainer" containerID="fe3fc7470e1a98184b7765bfcfa9c2cc0f31f06ddda4d8711626fee2017165a0" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.125558 4679 scope.go:117] "RemoveContainer" containerID="42de09521e2201cbf9d16b1a49ad25963236c2d77264be13b5edf6c22c2fbc03" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.146625 4679 scope.go:117] "RemoveContainer" containerID="d32281488958b1b73df3fdda0a906d1af9ccf4066be198a0497b77971418b74c" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.172570 4679 scope.go:117] "RemoveContainer" containerID="592671ead8dfdcd3f46fdef7d5cb7bcbbe8d1e00f4e0101b939950b81f560048" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.218882 4679 scope.go:117] "RemoveContainer" containerID="f5c65c2d45ce008de19161c14559fea430fb3c95e6e13c3cb88bb20c4d1abf1c" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.272101 4679 scope.go:117] "RemoveContainer" containerID="78643914150be9c0f04b6947172cb80ed83a81dca8865ba3f7f1f29983d9bc50" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.328506 4679 scope.go:117] "RemoveContainer" containerID="1d7a2f570f3c87cff1edadccd6a628506dbe84ba079af8773ac342ef6fcab711" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.384948 4679 scope.go:117] "RemoveContainer" containerID="6a77ff6260ecd9ed74cf9e16f8862856ca0ec94b4b2b15752152da588fa30539" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.434261 4679 scope.go:117] "RemoveContainer" containerID="8e3d725f726e589955f04fc1bb8da4ef219ab6a39bf22d24d02fe582d9948de0" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.470843 4679 scope.go:117] "RemoveContainer" containerID="5e511c40dc14bb7e975ef8a30c06a48f2dbdc07d4decd4b63148eea179a40633" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.497627 4679 scope.go:117] "RemoveContainer" containerID="702dfef749bd9bf0497f8de1ebe892d3ea127ea521fedcc0bc24f24ba77bf6e9" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.532804 4679 scope.go:117] "RemoveContainer" containerID="bd655d9835e55fb5c8dc389c02144757a11df78aefc7954e9f383b0b65780705" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.552495 4679 scope.go:117] "RemoveContainer" containerID="49ecdbdbc68d12a4fe1abdfe5c744be5f2b6caf6ca44eedcb3cf3c5d262d3af6" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.579338 4679 scope.go:117] "RemoveContainer" containerID="d210313605fa27b3fc5b64d7354bd1c949fb1a236473054705851e247eb3af44" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.607213 4679 scope.go:117] "RemoveContainer" containerID="bf346f95cc09f5a770e84bbb89414120a7dd474dc62b88cc7f35e817df969490" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.629652 4679 scope.go:117] "RemoveContainer" 
containerID="2c82de99887a1b41f607a502e596bc05605ed702dbb7ae0cef5e0e22d6a9e70c" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.654227 4679 scope.go:117] "RemoveContainer" containerID="47c5dc908b7b831011fbaea724b9bc94327a992123de3a47abf0977b5f8ee627" Feb 03 12:34:33 crc kubenswrapper[4679]: I0203 12:34:33.672591 4679 scope.go:117] "RemoveContainer" containerID="fd912c10308b73573d22169e4be213c1b1de66a54bd65ab840a438adff1c4127" Feb 03 12:34:39 crc kubenswrapper[4679]: I0203 12:34:39.038719 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-q8h2z"] Feb 03 12:34:39 crc kubenswrapper[4679]: I0203 12:34:39.046454 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-q8h2z"] Feb 03 12:34:40 crc kubenswrapper[4679]: I0203 12:34:40.245430 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4982065-33b7-4840-8c29-2e4507cfe43d" path="/var/lib/kubelet/pods/b4982065-33b7-4840-8c29-2e4507cfe43d/volumes" Feb 03 12:34:43 crc kubenswrapper[4679]: I0203 12:34:43.212037 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:34:43 crc kubenswrapper[4679]: E0203 12:34:43.212532 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:34:54 crc kubenswrapper[4679]: I0203 12:34:54.211590 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:34:54 crc kubenswrapper[4679]: E0203 12:34:54.212622 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:35:05 crc kubenswrapper[4679]: I0203 12:35:05.212037 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:35:05 crc kubenswrapper[4679]: E0203 12:35:05.213037 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:35:06 crc kubenswrapper[4679]: I0203 12:35:06.680081 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" event={"ID":"aa12b60d-98f3-42a6-b429-cd451b1ec5fc","Type":"ContainerDied","Data":"6acc77b7db0e27c298bc57eeffbcb19bdee1d1960426facebce3d5a356f57bb9"} Feb 03 12:35:06 crc kubenswrapper[4679]: I0203 12:35:06.680645 4679 generic.go:334] "Generic (PLEG): container finished" podID="aa12b60d-98f3-42a6-b429-cd451b1ec5fc" 
containerID="6acc77b7db0e27c298bc57eeffbcb19bdee1d1960426facebce3d5a356f57bb9" exitCode=0 Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.127486 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.180696 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkw4k\" (UniqueName: \"kubernetes.io/projected/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-kube-api-access-kkw4k\") pod \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.180793 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-inventory\") pod \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.180853 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-ssh-key-openstack-edpm-ipam\") pod \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\" (UID: \"aa12b60d-98f3-42a6-b429-cd451b1ec5fc\") " Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.188373 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-kube-api-access-kkw4k" (OuterVolumeSpecName: "kube-api-access-kkw4k") pod "aa12b60d-98f3-42a6-b429-cd451b1ec5fc" (UID: "aa12b60d-98f3-42a6-b429-cd451b1ec5fc"). InnerVolumeSpecName "kube-api-access-kkw4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.257467 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-inventory" (OuterVolumeSpecName: "inventory") pod "aa12b60d-98f3-42a6-b429-cd451b1ec5fc" (UID: "aa12b60d-98f3-42a6-b429-cd451b1ec5fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.273102 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa12b60d-98f3-42a6-b429-cd451b1ec5fc" (UID: "aa12b60d-98f3-42a6-b429-cd451b1ec5fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.283974 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkw4k\" (UniqueName: \"kubernetes.io/projected/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-kube-api-access-kkw4k\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.284014 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.284024 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa12b60d-98f3-42a6-b429-cd451b1ec5fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.699976 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" event={"ID":"aa12b60d-98f3-42a6-b429-cd451b1ec5fc","Type":"ContainerDied","Data":"11f487bb63cc06c59bffde496a5e9f28f73bc9fef426426a5461601bf833264c"} Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.700559 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11f487bb63cc06c59bffde496a5e9f28f73bc9fef426426a5461601bf833264c" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.700060 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.785483 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc"] Feb 03 12:35:08 crc kubenswrapper[4679]: E0203 12:35:08.785990 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa12b60d-98f3-42a6-b429-cd451b1ec5fc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.786010 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa12b60d-98f3-42a6-b429-cd451b1ec5fc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.786206 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa12b60d-98f3-42a6-b429-cd451b1ec5fc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.786852 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.788544 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.789996 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.790077 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.792846 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.803492 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc"] Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.897672 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.897766 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnn2\" (UniqueName: \"kubernetes.io/projected/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-kube-api-access-qqnn2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.897847 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.999591 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqnn2\" (UniqueName: \"kubernetes.io/projected/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-kube-api-access-qqnn2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:08 crc kubenswrapper[4679]: I0203 12:35:08.999746 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:08.999856 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.004864 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.013922 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.019308 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqnn2\" (UniqueName: \"kubernetes.io/projected/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-kube-api-access-qqnn2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xmthc\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.109861 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.631340 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc"] Feb 03 12:35:09 crc kubenswrapper[4679]: W0203 12:35:09.635019 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21b7ed40_7c17_44bd_9ad8_f47f21ea4e84.slice/crio-451629a4a40b47d5e2249f2e8cc26015d52547e840ab444fdb5f5152c3151101 WatchSource:0}: Error finding container 451629a4a40b47d5e2249f2e8cc26015d52547e840ab444fdb5f5152c3151101: Status 404 returned error can't find the container with id 451629a4a40b47d5e2249f2e8cc26015d52547e840ab444fdb5f5152c3151101 Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.639958 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:35:09 crc kubenswrapper[4679]: I0203 12:35:09.709455 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" event={"ID":"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84","Type":"ContainerStarted","Data":"451629a4a40b47d5e2249f2e8cc26015d52547e840ab444fdb5f5152c3151101"} Feb 03 12:35:10 crc kubenswrapper[4679]: I0203 12:35:10.722283 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" event={"ID":"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84","Type":"ContainerStarted","Data":"2f998ff8807005e98dd5957eb6799e3d9ad58fe2d426fc0e302c6ffe804d5b80"} Feb 03 12:35:10 crc kubenswrapper[4679]: I0203 12:35:10.751564 4679 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" podStartSLOduration=2.025904884 podStartE2EDuration="2.751532769s" podCreationTimestamp="2026-02-03 12:35:08 +0000 UTC" firstStartedPulling="2026-02-03 12:35:09.639600877 +0000 UTC m=+1782.114496965" lastFinishedPulling="2026-02-03 12:35:10.365228762 +0000 UTC m=+1782.840124850" observedRunningTime="2026-02-03 12:35:10.744220694 +0000 UTC m=+1783.219116782" watchObservedRunningTime="2026-02-03 12:35:10.751532769 +0000 UTC m=+1783.226428877" Feb 03 12:35:15 crc kubenswrapper[4679]: I0203 12:35:15.056200 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-m9g7v"] Feb 03 12:35:15 crc kubenswrapper[4679]: I0203 12:35:15.066379 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-m9g7v"] Feb 03 12:35:16 crc kubenswrapper[4679]: I0203 12:35:16.224142 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc594779-2b21-4b8c-8fc6-a2f51273089d" path="/var/lib/kubelet/pods/cc594779-2b21-4b8c-8fc6-a2f51273089d/volumes" Feb 03 12:35:17 crc kubenswrapper[4679]: I0203 12:35:17.212253 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:35:17 crc kubenswrapper[4679]: E0203 12:35:17.212846 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:35:22 crc kubenswrapper[4679]: I0203 12:35:22.067606 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qg7br"] Feb 03 12:35:22 crc kubenswrapper[4679]: I0203 12:35:22.104558 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rrkcf"] Feb 03 12:35:22 crc kubenswrapper[4679]: I0203 12:35:22.128650 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qg7br"] Feb 03 12:35:22 crc kubenswrapper[4679]: I0203 12:35:22.149919 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rrkcf"] Feb 03 12:35:22 crc kubenswrapper[4679]: I0203 12:35:22.224675 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de98726-0c88-46ae-9df5-fd6d031233f4" path="/var/lib/kubelet/pods/1de98726-0c88-46ae-9df5-fd6d031233f4/volumes" Feb 03 12:35:22 crc kubenswrapper[4679]: I0203 12:35:22.225569 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceafb034-bf62-4347-943f-622426408bb5" path="/var/lib/kubelet/pods/ceafb034-bf62-4347-943f-622426408bb5/volumes" Feb 03 12:35:29 crc kubenswrapper[4679]: I0203 12:35:29.036799 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vdgvw"] Feb 03 12:35:29 crc kubenswrapper[4679]: I0203 12:35:29.049053 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vdgvw"] Feb 03 12:35:30 crc kubenswrapper[4679]: I0203 12:35:30.224390 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e98bf9-342d-44dc-9742-1a732178eebd" path="/var/lib/kubelet/pods/c0e98bf9-342d-44dc-9742-1a732178eebd/volumes" Feb 03 12:35:31 crc kubenswrapper[4679]: I0203 
12:35:31.212147 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:35:31 crc kubenswrapper[4679]: E0203 12:35:31.212544 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:35:33 crc kubenswrapper[4679]: I0203 12:35:33.940404 4679 scope.go:117] "RemoveContainer" containerID="fdc5a95638ddc022b29a636131da9b5cf00ef384551a808b1bb943d5da051f33" Feb 03 12:35:33 crc kubenswrapper[4679]: I0203 12:35:33.989687 4679 scope.go:117] "RemoveContainer" containerID="ea8d8a7595e1aa053a5c7d9f2baf2eb80cb6d10026656a3bab0d685819910190" Feb 03 12:35:34 crc kubenswrapper[4679]: I0203 12:35:34.036905 4679 scope.go:117] "RemoveContainer" containerID="9ed62df418c77323446252ab9bceb5da54ee0acbb7f65eb97cdee2ebdcbb8ec0" Feb 03 12:35:34 crc kubenswrapper[4679]: I0203 12:35:34.076841 4679 scope.go:117] "RemoveContainer" containerID="2425f3bfa5f0fd45afdd47d687d4748f219aca10d2741a9272211f069aab10f8" Feb 03 12:35:34 crc kubenswrapper[4679]: I0203 12:35:34.129968 4679 scope.go:117] "RemoveContainer" containerID="408c5f73d6a80852df6ff82ec3474d87bca4d6aa3a34678972fbf1adf820ed0a" Feb 03 12:35:40 crc kubenswrapper[4679]: I0203 12:35:40.044430 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7n6pw"] Feb 03 12:35:40 crc kubenswrapper[4679]: I0203 12:35:40.054673 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7n6pw"] Feb 03 12:35:40 crc kubenswrapper[4679]: I0203 12:35:40.229410 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9ca558-ad13-4599-80e5-05be55c84a55" path="/var/lib/kubelet/pods/bc9ca558-ad13-4599-80e5-05be55c84a55/volumes" Feb 03 12:35:43 crc kubenswrapper[4679]: I0203 12:35:43.211830 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:35:43 crc kubenswrapper[4679]: E0203 12:35:43.212946 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:35:57 crc kubenswrapper[4679]: I0203 12:35:57.211863 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:35:57 crc kubenswrapper[4679]: E0203 12:35:57.212907 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:36:09 crc kubenswrapper[4679]: I0203 12:36:09.212668 4679 scope.go:117] "RemoveContainer" 
containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:36:09 crc kubenswrapper[4679]: E0203 12:36:09.213480 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.484268 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gv97"] Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.486561 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.509041 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gv97"] Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.606003 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfcqk\" (UniqueName: \"kubernetes.io/projected/9b319e18-5838-4e21-b5c5-6e670a1667f8-kube-api-access-bfcqk\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.606449 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-utilities\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.606790 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-catalog-content\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.709143 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-catalog-content\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.709534 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfcqk\" (UniqueName: \"kubernetes.io/projected/9b319e18-5838-4e21-b5c5-6e670a1667f8-kube-api-access-bfcqk\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.709646 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-utilities\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 
12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.709883 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-catalog-content\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.710120 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-utilities\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.731287 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfcqk\" (UniqueName: \"kubernetes.io/projected/9b319e18-5838-4e21-b5c5-6e670a1667f8-kube-api-access-bfcqk\") pod \"certified-operators-7gv97\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:16 crc kubenswrapper[4679]: I0203 12:36:16.811663 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:17 crc kubenswrapper[4679]: I0203 12:36:17.305518 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gv97"] Feb 03 12:36:17 crc kubenswrapper[4679]: I0203 12:36:17.349887 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gv97" event={"ID":"9b319e18-5838-4e21-b5c5-6e670a1667f8","Type":"ContainerStarted","Data":"41e928ec1cf940799114009376c86379e1dbb1755779814a72ae1070a361f022"} Feb 03 12:36:18 crc kubenswrapper[4679]: I0203 12:36:18.360821 4679 generic.go:334] "Generic (PLEG): container finished" podID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerID="a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91" exitCode=0 Feb 03 12:36:18 crc kubenswrapper[4679]: I0203 12:36:18.361058 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gv97" event={"ID":"9b319e18-5838-4e21-b5c5-6e670a1667f8","Type":"ContainerDied","Data":"a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91"} Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.050180 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6cwkq"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.065168 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-90d7-account-create-update-zdz8k"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.075851 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jwvqh"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.084148 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ec45-account-create-update-4kxqf"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.091776 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6cwkq"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.100395 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vmpqh"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.109097 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-73aa-account-create-update-pmdvr"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.117229 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jwvqh"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.125309 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-90d7-account-create-update-zdz8k"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.134335 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ec45-account-create-update-4kxqf"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.144594 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vmpqh"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.155908 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-73aa-account-create-update-pmdvr"] Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.223801 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c7b843-97ec-45e7-b87a-cff6549aee8a" path="/var/lib/kubelet/pods/27c7b843-97ec-45e7-b87a-cff6549aee8a/volumes" Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.224389 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60aa7052-469c-4202-83c1-780e52588e83" path="/var/lib/kubelet/pods/60aa7052-469c-4202-83c1-780e52588e83/volumes" Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.225010 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac3d52df-3f8a-4ba1-97eb-889f68e40cae" path="/var/lib/kubelet/pods/ac3d52df-3f8a-4ba1-97eb-889f68e40cae/volumes" Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.225756 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e" path="/var/lib/kubelet/pods/bd97c6fe-33e7-4937-84f0-d0bb9d2aaa8e/volumes" Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.226782 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0696ddd-2b30-4e81-954c-9219fa89b5f8" path="/var/lib/kubelet/pods/e0696ddd-2b30-4e81-954c-9219fa89b5f8/volumes" Feb 03 12:36:20 crc kubenswrapper[4679]: I0203 12:36:20.227298 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0ba804-303f-44b9-8ba0-68278fee0f17" path="/var/lib/kubelet/pods/ec0ba804-303f-44b9-8ba0-68278fee0f17/volumes" Feb 03 12:36:22 crc kubenswrapper[4679]: I0203 12:36:22.212003 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:36:22 crc kubenswrapper[4679]: E0203 12:36:22.212610 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:36:22 crc kubenswrapper[4679]: I0203 12:36:22.397196 4679 generic.go:334] "Generic (PLEG): container finished" podID="21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" containerID="2f998ff8807005e98dd5957eb6799e3d9ad58fe2d426fc0e302c6ffe804d5b80" exitCode=0 Feb 03 12:36:22 crc kubenswrapper[4679]: I0203 12:36:22.397543 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" 
event={"ID":"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84","Type":"ContainerDied","Data":"2f998ff8807005e98dd5957eb6799e3d9ad58fe2d426fc0e302c6ffe804d5b80"} Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.817033 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.949718 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-inventory\") pod \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.950209 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqnn2\" (UniqueName: \"kubernetes.io/projected/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-kube-api-access-qqnn2\") pod \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.950317 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-ssh-key-openstack-edpm-ipam\") pod \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\" (UID: \"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84\") " Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.955642 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-kube-api-access-qqnn2" (OuterVolumeSpecName: "kube-api-access-qqnn2") pod "21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" (UID: "21b7ed40-7c17-44bd-9ad8-f47f21ea4e84"). InnerVolumeSpecName "kube-api-access-qqnn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.977930 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-inventory" (OuterVolumeSpecName: "inventory") pod "21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" (UID: "21b7ed40-7c17-44bd-9ad8-f47f21ea4e84"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:23 crc kubenswrapper[4679]: I0203 12:36:23.978547 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" (UID: "21b7ed40-7c17-44bd-9ad8-f47f21ea4e84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.052623 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.052693 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.052705 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqnn2\" (UniqueName: \"kubernetes.io/projected/21b7ed40-7c17-44bd-9ad8-f47f21ea4e84-kube-api-access-qqnn2\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.426738 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" event={"ID":"21b7ed40-7c17-44bd-9ad8-f47f21ea4e84","Type":"ContainerDied","Data":"451629a4a40b47d5e2249f2e8cc26015d52547e840ab444fdb5f5152c3151101"} Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.426796 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451629a4a40b47d5e2249f2e8cc26015d52547e840ab444fdb5f5152c3151101" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.426799 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xmthc" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.429855 4679 generic.go:334] "Generic (PLEG): container finished" podID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerID="925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20" exitCode=0 Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.429915 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gv97" event={"ID":"9b319e18-5838-4e21-b5c5-6e670a1667f8","Type":"ContainerDied","Data":"925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20"} Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.515518 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz"] Feb 03 12:36:24 crc kubenswrapper[4679]: E0203 12:36:24.516044 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.516074 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.516310 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b7ed40-7c17-44bd-9ad8-f47f21ea4e84" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.517263 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.520906 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.521268 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.521444 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.521686 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.538120 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz"] Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.565541 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.566497 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bp62\" (UniqueName: \"kubernetes.io/projected/3b077848-9e84-4914-83b8-d47ebe659982-kube-api-access-5bp62\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.567118 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.668970 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.669335 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bp62\" (UniqueName: \"kubernetes.io/projected/3b077848-9e84-4914-83b8-d47ebe659982-kube-api-access-5bp62\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.670301 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.673865 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.676750 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.688432 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bp62\" (UniqueName: \"kubernetes.io/projected/3b077848-9e84-4914-83b8-d47ebe659982-kube-api-access-5bp62\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:24 crc kubenswrapper[4679]: I0203 12:36:24.837670 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:25 crc kubenswrapper[4679]: I0203 12:36:25.348979 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz"] Feb 03 12:36:25 crc kubenswrapper[4679]: W0203 12:36:25.356605 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b077848_9e84_4914_83b8_d47ebe659982.slice/crio-f28658ed190855647a64b7db244650fdd90b68bf417fe3731dae3acd72b37718 WatchSource:0}: Error finding container f28658ed190855647a64b7db244650fdd90b68bf417fe3731dae3acd72b37718: Status 404 returned error can't find the container with id f28658ed190855647a64b7db244650fdd90b68bf417fe3731dae3acd72b37718 Feb 03 12:36:25 crc kubenswrapper[4679]: I0203 12:36:25.438923 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" event={"ID":"3b077848-9e84-4914-83b8-d47ebe659982","Type":"ContainerStarted","Data":"f28658ed190855647a64b7db244650fdd90b68bf417fe3731dae3acd72b37718"} Feb 03 12:36:25 crc kubenswrapper[4679]: I0203 12:36:25.442177 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gv97" event={"ID":"9b319e18-5838-4e21-b5c5-6e670a1667f8","Type":"ContainerStarted","Data":"6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62"} Feb 03 12:36:25 crc kubenswrapper[4679]: I0203 12:36:25.467062 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gv97" podStartSLOduration=2.752785734 podStartE2EDuration="9.467038951s" podCreationTimestamp="2026-02-03 12:36:16 +0000 
UTC" firstStartedPulling="2026-02-03 12:36:18.363581224 +0000 UTC m=+1850.838477332" lastFinishedPulling="2026-02-03 12:36:25.077834461 +0000 UTC m=+1857.552730549" observedRunningTime="2026-02-03 12:36:25.46025709 +0000 UTC m=+1857.935153178" watchObservedRunningTime="2026-02-03 12:36:25.467038951 +0000 UTC m=+1857.941935039" Feb 03 12:36:26 crc kubenswrapper[4679]: I0203 12:36:26.452912 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" event={"ID":"3b077848-9e84-4914-83b8-d47ebe659982","Type":"ContainerStarted","Data":"4f620ef86ffb31787bdf6543e7170a25b7ec78953fb58dd4b51f08b9ec233a93"} Feb 03 12:36:26 crc kubenswrapper[4679]: I0203 12:36:26.475335 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" podStartSLOduration=1.9812671609999999 podStartE2EDuration="2.475308941s" podCreationTimestamp="2026-02-03 12:36:24 +0000 UTC" firstStartedPulling="2026-02-03 12:36:25.358571049 +0000 UTC m=+1857.833467137" lastFinishedPulling="2026-02-03 12:36:25.852612829 +0000 UTC m=+1858.327508917" observedRunningTime="2026-02-03 12:36:26.471258979 +0000 UTC m=+1858.946155077" watchObservedRunningTime="2026-02-03 12:36:26.475308941 +0000 UTC m=+1858.950205029" Feb 03 12:36:26 crc kubenswrapper[4679]: I0203 12:36:26.812143 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:26 crc kubenswrapper[4679]: I0203 12:36:26.812496 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:26 crc kubenswrapper[4679]: I0203 12:36:26.861204 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:31 crc kubenswrapper[4679]: I0203 12:36:31.495072 4679 generic.go:334] "Generic (PLEG): container finished" podID="3b077848-9e84-4914-83b8-d47ebe659982" containerID="4f620ef86ffb31787bdf6543e7170a25b7ec78953fb58dd4b51f08b9ec233a93" exitCode=0 Feb 03 12:36:31 crc kubenswrapper[4679]: I0203 12:36:31.495169 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" event={"ID":"3b077848-9e84-4914-83b8-d47ebe659982","Type":"ContainerDied","Data":"4f620ef86ffb31787bdf6543e7170a25b7ec78953fb58dd4b51f08b9ec233a93"} Feb 03 12:36:32 crc kubenswrapper[4679]: I0203 12:36:32.945404 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.043631 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bp62\" (UniqueName: \"kubernetes.io/projected/3b077848-9e84-4914-83b8-d47ebe659982-kube-api-access-5bp62\") pod \"3b077848-9e84-4914-83b8-d47ebe659982\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.043735 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-ssh-key-openstack-edpm-ipam\") pod \"3b077848-9e84-4914-83b8-d47ebe659982\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.043867 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-inventory\") pod \"3b077848-9e84-4914-83b8-d47ebe659982\" (UID: \"3b077848-9e84-4914-83b8-d47ebe659982\") " Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.050655 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b077848-9e84-4914-83b8-d47ebe659982-kube-api-access-5bp62" (OuterVolumeSpecName: "kube-api-access-5bp62") pod "3b077848-9e84-4914-83b8-d47ebe659982" (UID: "3b077848-9e84-4914-83b8-d47ebe659982"). InnerVolumeSpecName "kube-api-access-5bp62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.097910 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b077848-9e84-4914-83b8-d47ebe659982" (UID: "3b077848-9e84-4914-83b8-d47ebe659982"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.104205 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-inventory" (OuterVolumeSpecName: "inventory") pod "3b077848-9e84-4914-83b8-d47ebe659982" (UID: "3b077848-9e84-4914-83b8-d47ebe659982"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.145899 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bp62\" (UniqueName: \"kubernetes.io/projected/3b077848-9e84-4914-83b8-d47ebe659982-kube-api-access-5bp62\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.145953 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.145964 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b077848-9e84-4914-83b8-d47ebe659982-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.511635 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" event={"ID":"3b077848-9e84-4914-83b8-d47ebe659982","Type":"ContainerDied","Data":"f28658ed190855647a64b7db244650fdd90b68bf417fe3731dae3acd72b37718"} Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.511686 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28658ed190855647a64b7db244650fdd90b68bf417fe3731dae3acd72b37718" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.511702 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.596511 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf"] Feb 03 12:36:33 crc kubenswrapper[4679]: E0203 12:36:33.597207 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b077848-9e84-4914-83b8-d47ebe659982" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.597229 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b077848-9e84-4914-83b8-d47ebe659982" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.597442 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b077848-9e84-4914-83b8-d47ebe659982" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.598083 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.599971 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.600171 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.600809 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.601026 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.611635 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf"] Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.655347 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2n8r\" (UniqueName: \"kubernetes.io/projected/2399747b-7fec-4916-8a58-13a53de36d78-kube-api-access-m2n8r\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.655654 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.655877 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.758495 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2n8r\" (UniqueName: \"kubernetes.io/projected/2399747b-7fec-4916-8a58-13a53de36d78-kube-api-access-m2n8r\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.758658 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.758863 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.766922 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.771111 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.780260 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2n8r\" (UniqueName: \"kubernetes.io/projected/2399747b-7fec-4916-8a58-13a53de36d78-kube-api-access-m2n8r\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-whhrf\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:33 crc kubenswrapper[4679]: I0203 12:36:33.914681 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.324627 4679 scope.go:117] "RemoveContainer" containerID="625a09bdfac9c2afdfdaf0455998fdc19b93c8c88b50d2b4024909116e592bb6" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.382214 4679 scope.go:117] "RemoveContainer" containerID="0e379761e057915029ca52b8cfb63668fdef36bb93cd5f451e059ae89c51ab98" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.434346 4679 scope.go:117] "RemoveContainer" containerID="1e1dc7ac98104483e7b2d3ebaf36bfcd1b236b7ecda820ceb9a97929d66f4c76" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.465869 4679 scope.go:117] "RemoveContainer" containerID="cc5a8b11a553d985dbdc1e2beaf773dfd98276809f0fe0b93032f18290358f01" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.485678 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf"] Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.487749 4679 scope.go:117] "RemoveContainer" containerID="d34823c3d502d9c677b71c45f4cba003d692b8498dc02716f3cbf5d71c0a2fd7" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.511728 4679 scope.go:117] "RemoveContainer" containerID="7bc65ae5f2391d7814e5acedf8737a5f2d7a056e609920b0206a3512091dd5a5" Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.526421 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" event={"ID":"2399747b-7fec-4916-8a58-13a53de36d78","Type":"ContainerStarted","Data":"954e87659c37755e7317a14d23ddda9fed9a902f4fccdea7ebd2925964cabe62"} Feb 03 12:36:34 crc kubenswrapper[4679]: I0203 12:36:34.549528 4679 scope.go:117] "RemoveContainer" containerID="b34d0695bbd585feab8d19f495117500923fc21dc4cdf9f375e81f4f0c34ad44" Feb 03 12:36:36 crc kubenswrapper[4679]: I0203 12:36:36.544615 4679 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" event={"ID":"2399747b-7fec-4916-8a58-13a53de36d78","Type":"ContainerStarted","Data":"7a2febc23299637debd115f64466cb4334d02a532eb74e27c3c8cc488f681572"} Feb 03 12:36:36 crc kubenswrapper[4679]: I0203 12:36:36.561028 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" podStartSLOduration=1.918528359 podStartE2EDuration="3.561000334s" podCreationTimestamp="2026-02-03 12:36:33 +0000 UTC" firstStartedPulling="2026-02-03 12:36:34.496934341 +0000 UTC m=+1866.971830439" lastFinishedPulling="2026-02-03 12:36:36.139406326 +0000 UTC m=+1868.614302414" observedRunningTime="2026-02-03 12:36:36.559678421 +0000 UTC m=+1869.034574509" watchObservedRunningTime="2026-02-03 12:36:36.561000334 +0000 UTC m=+1869.035896423" Feb 03 12:36:36 crc kubenswrapper[4679]: I0203 12:36:36.864214 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:36 crc kubenswrapper[4679]: I0203 12:36:36.933378 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gv97"] Feb 03 12:36:37 crc kubenswrapper[4679]: I0203 12:36:37.211635 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:36:37 crc kubenswrapper[4679]: E0203 12:36:37.211948 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:36:37 crc kubenswrapper[4679]: I0203 12:36:37.553157 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gv97" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="registry-server" containerID="cri-o://6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62" gracePeriod=2 Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.510735 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.561133 4679 generic.go:334] "Generic (PLEG): container finished" podID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerID="6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62" exitCode=0 Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.561168 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gv97" event={"ID":"9b319e18-5838-4e21-b5c5-6e670a1667f8","Type":"ContainerDied","Data":"6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62"} Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.561194 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gv97" event={"ID":"9b319e18-5838-4e21-b5c5-6e670a1667f8","Type":"ContainerDied","Data":"41e928ec1cf940799114009376c86379e1dbb1755779814a72ae1070a361f022"} Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.561210 4679 scope.go:117] "RemoveContainer" containerID="6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.561323 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gv97" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.580285 4679 scope.go:117] "RemoveContainer" containerID="925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.580878 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-utilities\") pod \"9b319e18-5838-4e21-b5c5-6e670a1667f8\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.580949 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-catalog-content\") pod \"9b319e18-5838-4e21-b5c5-6e670a1667f8\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.580970 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfcqk\" (UniqueName: \"kubernetes.io/projected/9b319e18-5838-4e21-b5c5-6e670a1667f8-kube-api-access-bfcqk\") pod \"9b319e18-5838-4e21-b5c5-6e670a1667f8\" (UID: \"9b319e18-5838-4e21-b5c5-6e670a1667f8\") " Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.581845 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-utilities" (OuterVolumeSpecName: "utilities") pod "9b319e18-5838-4e21-b5c5-6e670a1667f8" (UID: "9b319e18-5838-4e21-b5c5-6e670a1667f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.588554 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b319e18-5838-4e21-b5c5-6e670a1667f8-kube-api-access-bfcqk" (OuterVolumeSpecName: "kube-api-access-bfcqk") pod "9b319e18-5838-4e21-b5c5-6e670a1667f8" (UID: "9b319e18-5838-4e21-b5c5-6e670a1667f8"). InnerVolumeSpecName "kube-api-access-bfcqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.640340 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b319e18-5838-4e21-b5c5-6e670a1667f8" (UID: "9b319e18-5838-4e21-b5c5-6e670a1667f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.650905 4679 scope.go:117] "RemoveContainer" containerID="a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.683415 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.683459 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b319e18-5838-4e21-b5c5-6e670a1667f8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.683492 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfcqk\" (UniqueName: \"kubernetes.io/projected/9b319e18-5838-4e21-b5c5-6e670a1667f8-kube-api-access-bfcqk\") on node \"crc\" DevicePath \"\"" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.701537 4679 scope.go:117] "RemoveContainer" containerID="6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62" Feb 03 12:36:38 crc kubenswrapper[4679]: E0203 12:36:38.701937 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62\": container with ID starting with 6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62 not found: ID does not exist" containerID="6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.701976 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62"} err="failed to get container status \"6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62\": rpc error: code = NotFound desc = could not find container \"6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62\": container with ID starting with 6e8f00c725bfa83fe9e6dba906951025a204f84d1c0b191ce53250fe4162fb62 not found: ID does not exist" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.702003 4679 scope.go:117] "RemoveContainer" containerID="925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20" Feb 03 12:36:38 crc kubenswrapper[4679]: E0203 12:36:38.702228 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20\": container with ID starting with 925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20 not found: ID does not exist" containerID="925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.702257 4679 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20"} err="failed to get container status \"925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20\": rpc error: code = NotFound desc = could not find container \"925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20\": container with ID starting with 925a476f7dd4755314bd10c73c4e138b07ce357a1cf235ea21e6cb49f7195c20 not found: ID does not exist" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.702275 4679 scope.go:117] "RemoveContainer" containerID="a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91" Feb 03 12:36:38 crc kubenswrapper[4679]: E0203 12:36:38.702546 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91\": container with ID starting with a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91 not found: ID does not exist" containerID="a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.702568 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91"} err="failed to get container status \"a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91\": rpc error: code = NotFound desc = could not find container \"a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91\": container with ID starting with a3974c71c9e46bb46e24854e6c473a9faad6d50187f4c7f33a7ccf6097023c91 not found: ID does not exist" Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.896920 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gv97"] Feb 03 12:36:38 crc kubenswrapper[4679]: I0203 12:36:38.905152 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gv97"] Feb 03 12:36:40 crc kubenswrapper[4679]: I0203 12:36:40.227190 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" path="/var/lib/kubelet/pods/9b319e18-5838-4e21-b5c5-6e670a1667f8/volumes" Feb 03 12:36:46 crc kubenswrapper[4679]: I0203 12:36:46.057182 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bx7p9"] Feb 03 12:36:46 crc kubenswrapper[4679]: I0203 12:36:46.070015 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bx7p9"] Feb 03 12:36:46 crc kubenswrapper[4679]: I0203 12:36:46.233939 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217fbaf0-f384-44f2-a7ec-07fbc5eb38a5" path="/var/lib/kubelet/pods/217fbaf0-f384-44f2-a7ec-07fbc5eb38a5/volumes" Feb 03 12:36:52 crc kubenswrapper[4679]: I0203 12:36:52.213671 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:36:52 crc kubenswrapper[4679]: E0203 12:36:52.214566 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" 
podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:37:06 crc kubenswrapper[4679]: I0203 12:37:06.212116 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:37:06 crc kubenswrapper[4679]: E0203 12:37:06.213133 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:37:10 crc kubenswrapper[4679]: I0203 12:37:10.047704 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-js858"] Feb 03 12:37:10 crc kubenswrapper[4679]: I0203 12:37:10.056152 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-js858"] Feb 03 12:37:10 crc kubenswrapper[4679]: I0203 12:37:10.230697 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f" path="/var/lib/kubelet/pods/30a8dcc4-d726-4b2c-b22d-bd9d3ba48c0f/volumes" Feb 03 12:37:10 crc kubenswrapper[4679]: I0203 12:37:10.854656 4679 generic.go:334] "Generic (PLEG): container finished" podID="2399747b-7fec-4916-8a58-13a53de36d78" containerID="7a2febc23299637debd115f64466cb4334d02a532eb74e27c3c8cc488f681572" exitCode=0 Feb 03 12:37:10 crc kubenswrapper[4679]: I0203 12:37:10.854773 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" event={"ID":"2399747b-7fec-4916-8a58-13a53de36d78","Type":"ContainerDied","Data":"7a2febc23299637debd115f64466cb4334d02a532eb74e27c3c8cc488f681572"} Feb 03 12:37:11 crc kubenswrapper[4679]: I0203 12:37:11.034131 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqr7h"] Feb 03 12:37:11 crc kubenswrapper[4679]: I0203 12:37:11.041733 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqr7h"] Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.223967 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5" path="/var/lib/kubelet/pods/ab8b88a0-ed8f-4446-ac3c-dc76a0c191b5/volumes" Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.288223 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.478783 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2n8r\" (UniqueName: \"kubernetes.io/projected/2399747b-7fec-4916-8a58-13a53de36d78-kube-api-access-m2n8r\") pod \"2399747b-7fec-4916-8a58-13a53de36d78\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") "
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.478910 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-ssh-key-openstack-edpm-ipam\") pod \"2399747b-7fec-4916-8a58-13a53de36d78\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") "
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.478966 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-inventory\") pod \"2399747b-7fec-4916-8a58-13a53de36d78\" (UID: \"2399747b-7fec-4916-8a58-13a53de36d78\") "
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.485683 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2399747b-7fec-4916-8a58-13a53de36d78-kube-api-access-m2n8r" (OuterVolumeSpecName: "kube-api-access-m2n8r") pod "2399747b-7fec-4916-8a58-13a53de36d78" (UID: "2399747b-7fec-4916-8a58-13a53de36d78"). InnerVolumeSpecName "kube-api-access-m2n8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.514776 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-inventory" (OuterVolumeSpecName: "inventory") pod "2399747b-7fec-4916-8a58-13a53de36d78" (UID: "2399747b-7fec-4916-8a58-13a53de36d78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.521160 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2399747b-7fec-4916-8a58-13a53de36d78" (UID: "2399747b-7fec-4916-8a58-13a53de36d78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.580872 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2n8r\" (UniqueName: \"kubernetes.io/projected/2399747b-7fec-4916-8a58-13a53de36d78-kube-api-access-m2n8r\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.580905 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.580916 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2399747b-7fec-4916-8a58-13a53de36d78-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.874308 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf" event={"ID":"2399747b-7fec-4916-8a58-13a53de36d78","Type":"ContainerDied","Data":"954e87659c37755e7317a14d23ddda9fed9a902f4fccdea7ebd2925964cabe62"}
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.874679 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954e87659c37755e7317a14d23ddda9fed9a902f4fccdea7ebd2925964cabe62"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.874735 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-whhrf"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.968452 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"]
Feb 03 12:37:12 crc kubenswrapper[4679]: E0203 12:37:12.968898 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="extract-utilities"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.968922 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="extract-utilities"
Feb 03 12:37:12 crc kubenswrapper[4679]: E0203 12:37:12.968946 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2399747b-7fec-4916-8a58-13a53de36d78" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.968959 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="2399747b-7fec-4916-8a58-13a53de36d78" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:37:12 crc kubenswrapper[4679]: E0203 12:37:12.968982 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="registry-server"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.968989 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="registry-server"
Feb 03 12:37:12 crc kubenswrapper[4679]: E0203 12:37:12.969012 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="extract-content"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.969019 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="extract-content"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.969439 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b319e18-5838-4e21-b5c5-6e670a1667f8" containerName="registry-server"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.969473 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="2399747b-7fec-4916-8a58-13a53de36d78" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.970255 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.974673 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.974696 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.974946 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.975080 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg"
Feb 03 12:37:12 crc kubenswrapper[4679]: I0203 12:37:12.987884 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"]
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.090805 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.090902 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.091082 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65gc\" (UniqueName: \"kubernetes.io/projected/4d6011df-83a4-4d86-ac66-61b00cd615d4-kube-api-access-j65gc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.193286 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65gc\" (UniqueName: \"kubernetes.io/projected/4d6011df-83a4-4d86-ac66-61b00cd615d4-kube-api-access-j65gc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.193500 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.193553 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.199092 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.208728 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.210827 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65gc\" (UniqueName: \"kubernetes.io/projected/4d6011df-83a4-4d86-ac66-61b00cd615d4-kube-api-access-j65gc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.295016 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.832896 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"]
Feb 03 12:37:13 crc kubenswrapper[4679]: I0203 12:37:13.884063 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4" event={"ID":"4d6011df-83a4-4d86-ac66-61b00cd615d4","Type":"ContainerStarted","Data":"b747efb6ef9cecfdc38207bd1275b25d3ae1f46fdab064a4ac69886aaa25f437"}
Feb 03 12:37:14 crc kubenswrapper[4679]: I0203 12:37:14.894211 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4" event={"ID":"4d6011df-83a4-4d86-ac66-61b00cd615d4","Type":"ContainerStarted","Data":"f5774b4a2e1405d5f974f50aeb64a363dedf14ebd820f5b52807398bf712dd93"}
Feb 03 12:37:14 crc kubenswrapper[4679]: I0203 12:37:14.913596 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4" podStartSLOduration=2.4741011520000002 podStartE2EDuration="2.913570823s" podCreationTimestamp="2026-02-03 12:37:12 +0000 UTC" firstStartedPulling="2026-02-03 12:37:13.838398061 +0000 UTC m=+1906.313294149" lastFinishedPulling="2026-02-03 12:37:14.277867722 +0000 UTC m=+1906.752763820" observedRunningTime="2026-02-03 12:37:14.910409483 +0000 UTC m=+1907.385305581" watchObservedRunningTime="2026-02-03 12:37:14.913570823 +0000 UTC m=+1907.388466911"
Feb 03 12:37:21 crc kubenswrapper[4679]: I0203 12:37:21.211768 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:37:21 crc kubenswrapper[4679]: E0203 12:37:21.212662 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:37:33 crc kubenswrapper[4679]: I0203 12:37:33.213347 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:37:33 crc kubenswrapper[4679]: E0203 12:37:33.214408 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:37:34 crc kubenswrapper[4679]: I0203 12:37:34.891585 4679 scope.go:117] "RemoveContainer" containerID="444d740cad8ef7944e636fc1d6d2ed31382209228fac6ebce3653ededa713933"
Feb 03 12:37:34 crc kubenswrapper[4679]: I0203 12:37:34.946390 4679 scope.go:117] "RemoveContainer" containerID="ca3a6340e71b1870165d2f67e1c402614fcd2a2305a8a088f5e06da07095f91f"
Feb 03 12:37:35 crc kubenswrapper[4679]: I0203 12:37:35.016818 4679 scope.go:117] "RemoveContainer" containerID="bdec05c6e8bb0daab322f0eed2cd57bef2c6004a7dcc17a1842022c6304ce8dc"
Feb 03 12:37:44 crc kubenswrapper[4679]: I0203 12:37:44.212001 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:37:44 crc kubenswrapper[4679]: E0203 12:37:44.212717 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:37:55 crc kubenswrapper[4679]: I0203 12:37:55.046652 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-722gz"]
Feb 03 12:37:55 crc kubenswrapper[4679]: I0203 12:37:55.055235 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-722gz"]
Feb 03 12:37:56 crc kubenswrapper[4679]: I0203 12:37:56.212166 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:37:56 crc kubenswrapper[4679]: E0203 12:37:56.212777 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:37:56 crc kubenswrapper[4679]: I0203 12:37:56.224559 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313bf325-bcb3-47af-9916-3e441aa0754a" path="/var/lib/kubelet/pods/313bf325-bcb3-47af-9916-3e441aa0754a/volumes"
Feb 03 12:38:02 crc kubenswrapper[4679]: I0203 12:38:02.302482 4679 generic.go:334] "Generic (PLEG): container finished" podID="4d6011df-83a4-4d86-ac66-61b00cd615d4" containerID="f5774b4a2e1405d5f974f50aeb64a363dedf14ebd820f5b52807398bf712dd93" exitCode=0
Feb 03 12:38:02 crc kubenswrapper[4679]: I0203 12:38:02.302570 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4" event={"ID":"4d6011df-83a4-4d86-ac66-61b00cd615d4","Type":"ContainerDied","Data":"f5774b4a2e1405d5f974f50aeb64a363dedf14ebd820f5b52807398bf712dd93"}
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.735663 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.894049 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-inventory\") pod \"4d6011df-83a4-4d86-ac66-61b00cd615d4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") "
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.894205 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j65gc\" (UniqueName: \"kubernetes.io/projected/4d6011df-83a4-4d86-ac66-61b00cd615d4-kube-api-access-j65gc\") pod \"4d6011df-83a4-4d86-ac66-61b00cd615d4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") "
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.894274 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-ssh-key-openstack-edpm-ipam\") pod \"4d6011df-83a4-4d86-ac66-61b00cd615d4\" (UID: \"4d6011df-83a4-4d86-ac66-61b00cd615d4\") "
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.900159 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6011df-83a4-4d86-ac66-61b00cd615d4-kube-api-access-j65gc" (OuterVolumeSpecName: "kube-api-access-j65gc") pod "4d6011df-83a4-4d86-ac66-61b00cd615d4" (UID: "4d6011df-83a4-4d86-ac66-61b00cd615d4"). InnerVolumeSpecName "kube-api-access-j65gc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.921266 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-inventory" (OuterVolumeSpecName: "inventory") pod "4d6011df-83a4-4d86-ac66-61b00cd615d4" (UID: "4d6011df-83a4-4d86-ac66-61b00cd615d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.921783 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d6011df-83a4-4d86-ac66-61b00cd615d4" (UID: "4d6011df-83a4-4d86-ac66-61b00cd615d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.998472 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.998508 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6011df-83a4-4d86-ac66-61b00cd615d4-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:03 crc kubenswrapper[4679]: I0203 12:38:03.998518 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j65gc\" (UniqueName: \"kubernetes.io/projected/4d6011df-83a4-4d86-ac66-61b00cd615d4-kube-api-access-j65gc\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.322892 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.322855 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4" event={"ID":"4d6011df-83a4-4d86-ac66-61b00cd615d4","Type":"ContainerDied","Data":"b747efb6ef9cecfdc38207bd1275b25d3ae1f46fdab064a4ac69886aaa25f437"}
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.323043 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b747efb6ef9cecfdc38207bd1275b25d3ae1f46fdab064a4ac69886aaa25f437"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.447758 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zkxjk"]
Feb 03 12:38:04 crc kubenswrapper[4679]: E0203 12:38:04.448522 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6011df-83a4-4d86-ac66-61b00cd615d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.448679 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6011df-83a4-4d86-ac66-61b00cd615d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.448943 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6011df-83a4-4d86-ac66-61b00cd615d4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.449669 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.451742 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.452415 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.453077 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.454557 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.469790 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zkxjk"]
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.508075 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwlb\" (UniqueName: \"kubernetes.io/projected/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-kube-api-access-vjwlb\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.508262 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.508310 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.611644 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwlb\" (UniqueName: \"kubernetes.io/projected/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-kube-api-access-vjwlb\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.611828 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.611879 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.615790 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.622095 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.628956 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwlb\" (UniqueName: \"kubernetes.io/projected/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-kube-api-access-vjwlb\") pod \"ssh-known-hosts-edpm-deployment-zkxjk\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") " pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:04 crc kubenswrapper[4679]: I0203 12:38:04.770134 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:05 crc kubenswrapper[4679]: I0203 12:38:05.300030 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zkxjk"]
Feb 03 12:38:05 crc kubenswrapper[4679]: I0203 12:38:05.335973 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk" event={"ID":"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b","Type":"ContainerStarted","Data":"8939f6719c4c923e61c247e72cd4b415165b8010d08e4c6b4dff29a4b7c4631d"}
Feb 03 12:38:08 crc kubenswrapper[4679]: I0203 12:38:08.363809 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk" event={"ID":"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b","Type":"ContainerStarted","Data":"c1369da3fe1b401e35350cd36111306ddf7d2367e8d85e793fea3636b66732f2"}
Feb 03 12:38:08 crc kubenswrapper[4679]: I0203 12:38:08.384289 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk" podStartSLOduration=2.284981009 podStartE2EDuration="4.384272496s" podCreationTimestamp="2026-02-03 12:38:04 +0000 UTC" firstStartedPulling="2026-02-03 12:38:05.307617165 +0000 UTC m=+1957.782513253" lastFinishedPulling="2026-02-03 12:38:07.406908642 +0000 UTC m=+1959.881804740" observedRunningTime="2026-02-03 12:38:08.376134749 +0000 UTC m=+1960.851030857" watchObservedRunningTime="2026-02-03 12:38:08.384272496 +0000 UTC m=+1960.859168584"
Feb 03 12:38:09 crc kubenswrapper[4679]: I0203 12:38:09.212320 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b"
Feb 03 12:38:10 crc kubenswrapper[4679]: I0203 12:38:10.382100 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"f8b640d616a097de390c17670aa347eacf35caf5e83ed996a2a3d78316e76fdb"}
Feb 03 12:38:15 crc kubenswrapper[4679]: I0203 12:38:15.426002 4679 generic.go:334] "Generic (PLEG): container finished" podID="cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" containerID="c1369da3fe1b401e35350cd36111306ddf7d2367e8d85e793fea3636b66732f2" exitCode=0
Feb 03 12:38:15 crc kubenswrapper[4679]: I0203 12:38:15.426076 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk" event={"ID":"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b","Type":"ContainerDied","Data":"c1369da3fe1b401e35350cd36111306ddf7d2367e8d85e793fea3636b66732f2"}
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.825084 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.954053 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjwlb\" (UniqueName: \"kubernetes.io/projected/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-kube-api-access-vjwlb\") pod \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") "
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.954384 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-ssh-key-openstack-edpm-ipam\") pod \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") "
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.954791 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-inventory-0\") pod \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\" (UID: \"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b\") "
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.960313 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-kube-api-access-vjwlb" (OuterVolumeSpecName: "kube-api-access-vjwlb") pod "cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" (UID: "cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b"). InnerVolumeSpecName "kube-api-access-vjwlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.983985 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" (UID: "cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:16 crc kubenswrapper[4679]: I0203 12:38:16.984720 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" (UID: "cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.056901 4679 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.056943 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjwlb\" (UniqueName: \"kubernetes.io/projected/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-kube-api-access-vjwlb\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.056955 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.444837 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk" event={"ID":"cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b","Type":"ContainerDied","Data":"8939f6719c4c923e61c247e72cd4b415165b8010d08e4c6b4dff29a4b7c4631d"}
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.444888 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8939f6719c4c923e61c247e72cd4b415165b8010d08e4c6b4dff29a4b7c4631d"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.444965 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zkxjk"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.526695 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"]
Feb 03 12:38:17 crc kubenswrapper[4679]: E0203 12:38:17.527210 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" containerName="ssh-known-hosts-edpm-deployment"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.527229 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" containerName="ssh-known-hosts-edpm-deployment"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.527490 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b" containerName="ssh-known-hosts-edpm-deployment"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.528206 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.530543 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.533092 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.533298 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.533416 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.537548 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"]
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.669245 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb26l\" (UniqueName: \"kubernetes.io/projected/e11ad6fe-5a94-4797-a827-ca1918e67f79-kube-api-access-zb26l\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.669950 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.670044 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.804604 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.804772 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb26l\" (UniqueName: \"kubernetes.io/projected/e11ad6fe-5a94-4797-a827-ca1918e67f79-kube-api-access-zb26l\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.804818 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.810472 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.810816 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.820142 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb26l\" (UniqueName: \"kubernetes.io/projected/e11ad6fe-5a94-4797-a827-ca1918e67f79-kube-api-access-zb26l\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lkc9z\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:17 crc kubenswrapper[4679]: I0203 12:38:17.848991 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:18 crc kubenswrapper[4679]: I0203 12:38:18.355873 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"]
Feb 03 12:38:18 crc kubenswrapper[4679]: I0203 12:38:18.456917 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z" event={"ID":"e11ad6fe-5a94-4797-a827-ca1918e67f79","Type":"ContainerStarted","Data":"a0884b8a36bb8ef60a87245e1a3f0fbe191c853771f0c277fc98abfceda90be4"}
Feb 03 12:38:19 crc kubenswrapper[4679]: I0203 12:38:19.467954 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z" event={"ID":"e11ad6fe-5a94-4797-a827-ca1918e67f79","Type":"ContainerStarted","Data":"1abae2740d66495f32fb3e531fb83fd9f417bd7ccf1f5d081e309954d1677f59"}
Feb 03 12:38:19 crc kubenswrapper[4679]: I0203 12:38:19.495305 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z" podStartSLOduration=2.111196893 podStartE2EDuration="2.495286202s" podCreationTimestamp="2026-02-03 12:38:17 +0000 UTC" firstStartedPulling="2026-02-03 12:38:18.367637972 +0000 UTC m=+1970.842534060" lastFinishedPulling="2026-02-03 12:38:18.751727281 +0000 UTC m=+1971.226623369" observedRunningTime="2026-02-03 12:38:19.493127717 +0000 UTC m=+1971.968023825" watchObservedRunningTime="2026-02-03 12:38:19.495286202 +0000 UTC m=+1971.970182290"
Feb 03 12:38:27 crc kubenswrapper[4679]: I0203 12:38:27.532053 4679 generic.go:334] "Generic (PLEG): container finished" podID="e11ad6fe-5a94-4797-a827-ca1918e67f79" containerID="1abae2740d66495f32fb3e531fb83fd9f417bd7ccf1f5d081e309954d1677f59" exitCode=0
Feb 03 12:38:27 crc kubenswrapper[4679]: I0203 12:38:27.532172 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z" event={"ID":"e11ad6fe-5a94-4797-a827-ca1918e67f79","Type":"ContainerDied","Data":"1abae2740d66495f32fb3e531fb83fd9f417bd7ccf1f5d081e309954d1677f59"}
Feb 03 12:38:28 crc kubenswrapper[4679]: I0203 12:38:28.918625 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.027075 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-ssh-key-openstack-edpm-ipam\") pod \"e11ad6fe-5a94-4797-a827-ca1918e67f79\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") "
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.027390 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb26l\" (UniqueName: \"kubernetes.io/projected/e11ad6fe-5a94-4797-a827-ca1918e67f79-kube-api-access-zb26l\") pod \"e11ad6fe-5a94-4797-a827-ca1918e67f79\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") "
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.027649 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-inventory\") pod \"e11ad6fe-5a94-4797-a827-ca1918e67f79\" (UID: \"e11ad6fe-5a94-4797-a827-ca1918e67f79\") "
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.034540 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11ad6fe-5a94-4797-a827-ca1918e67f79-kube-api-access-zb26l" (OuterVolumeSpecName: "kube-api-access-zb26l") pod "e11ad6fe-5a94-4797-a827-ca1918e67f79" (UID: "e11ad6fe-5a94-4797-a827-ca1918e67f79"). InnerVolumeSpecName "kube-api-access-zb26l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.059979 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e11ad6fe-5a94-4797-a827-ca1918e67f79" (UID: "e11ad6fe-5a94-4797-a827-ca1918e67f79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.060010 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-inventory" (OuterVolumeSpecName: "inventory") pod "e11ad6fe-5a94-4797-a827-ca1918e67f79" (UID: "e11ad6fe-5a94-4797-a827-ca1918e67f79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.129536 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.129584 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e11ad6fe-5a94-4797-a827-ca1918e67f79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.129603 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb26l\" (UniqueName: \"kubernetes.io/projected/e11ad6fe-5a94-4797-a827-ca1918e67f79-kube-api-access-zb26l\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.549967 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z" event={"ID":"e11ad6fe-5a94-4797-a827-ca1918e67f79","Type":"ContainerDied","Data":"a0884b8a36bb8ef60a87245e1a3f0fbe191c853771f0c277fc98abfceda90be4"}
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.550025 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0884b8a36bb8ef60a87245e1a3f0fbe191c853771f0c277fc98abfceda90be4"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.550057 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lkc9z"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.617192 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"]
Feb 03 12:38:29 crc kubenswrapper[4679]: E0203 12:38:29.617696 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11ad6fe-5a94-4797-a827-ca1918e67f79" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.617715 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11ad6fe-5a94-4797-a827-ca1918e67f79" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.618231 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11ad6fe-5a94-4797-a827-ca1918e67f79" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.619104 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.621501 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.622132 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.622132 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.622503 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.628044 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"]
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.637879 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlmw\" (UniqueName: \"kubernetes.io/projected/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-kube-api-access-lnlmw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.637958 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.638057 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.739405 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.739527 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnlmw\" (UniqueName: \"kubernetes.io/projected/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-kube-api-access-lnlmw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.739565 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.743899 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.744903 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.757503 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnlmw\" (UniqueName: \"kubernetes.io/projected/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-kube-api-access-lnlmw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:29 crc kubenswrapper[4679]: I0203 12:38:29.942212 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:30 crc kubenswrapper[4679]: I0203 12:38:30.467787 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"]
Feb 03 12:38:30 crc kubenswrapper[4679]: I0203 12:38:30.560413 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs" event={"ID":"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61","Type":"ContainerStarted","Data":"06d66181210811dda23e132f069437fa49f5b3eebf63177dd85e98a12f9e6f92"}
Feb 03 12:38:31 crc kubenswrapper[4679]: I0203 12:38:31.572755 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs" event={"ID":"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61","Type":"ContainerStarted","Data":"d51ccbce66c2f76174d999f5ec336efd0a16ed91720a2d4781defa1d30d8c1d7"}
Feb 03 12:38:31 crc kubenswrapper[4679]: I0203 12:38:31.598861 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs" podStartSLOduration=2.179660816 podStartE2EDuration="2.598832638s" podCreationTimestamp="2026-02-03 12:38:29 +0000 UTC" firstStartedPulling="2026-02-03 12:38:30.473113958 +0000 UTC m=+1982.948010046" lastFinishedPulling="2026-02-03 12:38:30.89228578 +0000 UTC m=+1983.367181868" observedRunningTime="2026-02-03 12:38:31.590810124 +0000 UTC m=+1984.065706202" watchObservedRunningTime="2026-02-03 12:38:31.598832638 +0000 UTC m=+1984.073728726"
Feb 03 12:38:35 crc kubenswrapper[4679]: I0203 12:38:35.149102 4679 scope.go:117] "RemoveContainer" containerID="af4e745a5c5fe24439f5af568f671094ec925c4987c5c373c76e844c0e8c5bb8"
Feb 03 12:38:40 crc kubenswrapper[4679]: I0203 12:38:40.659526 4679 generic.go:334] "Generic (PLEG): container finished" podID="e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" containerID="d51ccbce66c2f76174d999f5ec336efd0a16ed91720a2d4781defa1d30d8c1d7" exitCode=0
Feb 03 12:38:40 crc kubenswrapper[4679]: I0203 12:38:40.659625 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs" event={"ID":"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61","Type":"ContainerDied","Data":"d51ccbce66c2f76174d999f5ec336efd0a16ed91720a2d4781defa1d30d8c1d7"}
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.068694 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.181100 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-ssh-key-openstack-edpm-ipam\") pod \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") "
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.181347 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnlmw\" (UniqueName: \"kubernetes.io/projected/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-kube-api-access-lnlmw\") pod \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") "
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.181919 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-inventory\") pod \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\" (UID: \"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61\") "
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.188420 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-kube-api-access-lnlmw" (OuterVolumeSpecName: "kube-api-access-lnlmw") pod "e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" (UID: "e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61"). InnerVolumeSpecName "kube-api-access-lnlmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.219261 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" (UID: "e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.220255 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-inventory" (OuterVolumeSpecName: "inventory") pod "e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" (UID: "e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.289946 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.289986 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnlmw\" (UniqueName: \"kubernetes.io/projected/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-kube-api-access-lnlmw\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.289996 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.677490 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.678884 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs" event={"ID":"e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61","Type":"ContainerDied","Data":"06d66181210811dda23e132f069437fa49f5b3eebf63177dd85e98a12f9e6f92"}
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.679293 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d66181210811dda23e132f069437fa49f5b3eebf63177dd85e98a12f9e6f92"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.771319 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"]
Feb 03 12:38:42 crc kubenswrapper[4679]: E0203 12:38:42.771711 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.771731 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.771931 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.772573 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.776418 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.776455 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.776850 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.777084 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.777258 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.777453 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.777562 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.777797 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.799675 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"]
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901284 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901352 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901394 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901448 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901467 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901487 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901508 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901533 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901552 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901573 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901589 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901609 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901643 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbp4\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-kube-api-access-pdbp4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:42 crc kubenswrapper[4679]: I0203 12:38:42.901668 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.003981 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004075 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004122 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"
Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004205 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") "
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004282 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004327 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004434 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004485 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004529 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004601 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbp4\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-kube-api-access-pdbp4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004649 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004756 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004818 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.004866 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.010019 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.010256 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.011287 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.011342 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.011482 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.011535 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.013352 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.013622 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.013930 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.014258 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.016178 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.016201 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.017495 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.028215 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbp4\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-kube-api-access-pdbp4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.094619 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.673694 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh"] Feb 03 12:38:43 crc kubenswrapper[4679]: W0203 12:38:43.675925 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf811a1c_76b7_4b43_b658_d68388d38cb8.slice/crio-3ced8fb1496af27eca5a9356240b56234efc25b6f35e6c17048f806b2d38d9af WatchSource:0}: Error finding container 3ced8fb1496af27eca5a9356240b56234efc25b6f35e6c17048f806b2d38d9af: Status 404 returned error can't find the container with id 3ced8fb1496af27eca5a9356240b56234efc25b6f35e6c17048f806b2d38d9af Feb 03 12:38:43 crc kubenswrapper[4679]: I0203 12:38:43.686694 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" event={"ID":"bf811a1c-76b7-4b43-b658-d68388d38cb8","Type":"ContainerStarted","Data":"3ced8fb1496af27eca5a9356240b56234efc25b6f35e6c17048f806b2d38d9af"} Feb 03 12:38:44 crc kubenswrapper[4679]: I0203 12:38:44.703199 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" event={"ID":"bf811a1c-76b7-4b43-b658-d68388d38cb8","Type":"ContainerStarted","Data":"24260b9c988635bae248b44ad2e726703e2e1c3354e1232a75fda60684184f66"} Feb 03 12:38:44 crc kubenswrapper[4679]: I0203 12:38:44.728459 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" podStartSLOduration=2.089573483 podStartE2EDuration="2.728435669s" podCreationTimestamp="2026-02-03 12:38:42 +0000 UTC" firstStartedPulling="2026-02-03 12:38:43.678820675 +0000 UTC m=+1996.153716763" lastFinishedPulling="2026-02-03 12:38:44.317682861 +0000 UTC m=+1996.792578949" observedRunningTime="2026-02-03 12:38:44.719186824 +0000 UTC m=+1997.194082912" watchObservedRunningTime="2026-02-03 12:38:44.728435669 +0000 UTC m=+1997.203331757" Feb 03 12:39:22 crc kubenswrapper[4679]: I0203 12:39:22.052237 4679 generic.go:334] "Generic (PLEG): container finished" podID="bf811a1c-76b7-4b43-b658-d68388d38cb8" containerID="24260b9c988635bae248b44ad2e726703e2e1c3354e1232a75fda60684184f66" exitCode=0 Feb 03 12:39:22 crc kubenswrapper[4679]: I0203 12:39:22.052310 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" 
event={"ID":"bf811a1c-76b7-4b43-b658-d68388d38cb8","Type":"ContainerDied","Data":"24260b9c988635bae248b44ad2e726703e2e1c3354e1232a75fda60684184f66"} Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.484910 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.519022 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ovn-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.519133 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.519227 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdbp4\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-kube-api-access-pdbp4\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.519277 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ssh-key-openstack-edpm-ipam\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520533 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-libvirt-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520580 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-bootstrap-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520631 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-neutron-metadata-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520660 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520701 4679 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-nova-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520735 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-inventory\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520780 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520813 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520849 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-repo-setup-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.520900 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-telemetry-combined-ca-bundle\") pod \"bf811a1c-76b7-4b43-b658-d68388d38cb8\" (UID: \"bf811a1c-76b7-4b43-b658-d68388d38cb8\") " Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.531017 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-kube-api-access-pdbp4" (OuterVolumeSpecName: "kube-api-access-pdbp4") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "kube-api-access-pdbp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.537347 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.547245 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.574617 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.576587 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.578663 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.579877 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.579971 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.580448 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581408 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdbp4\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-kube-api-access-pdbp4\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581432 4679 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581456 4679 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581475 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581488 4679 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581501 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581518 4679 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581531 4679 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.581544 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.591840 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.623059 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-inventory" (OuterVolumeSpecName: "inventory") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.625953 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.638774 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.654619 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf811a1c-76b7-4b43-b658-d68388d38cb8" (UID: "bf811a1c-76b7-4b43-b658-d68388d38cb8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.694843 4679 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.694888 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.694900 4679 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.694947 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/bf811a1c-76b7-4b43-b658-d68388d38cb8-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:23 crc kubenswrapper[4679]: I0203 12:39:23.694966 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf811a1c-76b7-4b43-b658-d68388d38cb8-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.075747 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" event={"ID":"bf811a1c-76b7-4b43-b658-d68388d38cb8","Type":"ContainerDied","Data":"3ced8fb1496af27eca5a9356240b56234efc25b6f35e6c17048f806b2d38d9af"} Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.075801 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ced8fb1496af27eca5a9356240b56234efc25b6f35e6c17048f806b2d38d9af" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.075868 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.198040 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt"] Feb 03 12:39:24 crc kubenswrapper[4679]: E0203 12:39:24.198849 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf811a1c-76b7-4b43-b658-d68388d38cb8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.198894 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf811a1c-76b7-4b43-b658-d68388d38cb8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.199172 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf811a1c-76b7-4b43-b658-d68388d38cb8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.200112 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.202389 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.202669 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.202803 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.204004 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.204241 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.210239 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt"] Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.305122 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.305276 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.305343 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.305458 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.305488 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhcp\" (UniqueName: \"kubernetes.io/projected/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-kube-api-access-srhcp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.407136 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.408389 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srhcp\" (UniqueName: \"kubernetes.io/projected/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-kube-api-access-srhcp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.408632 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.408823 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.408943 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.409667 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.412982 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.413270 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.414681 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.427145 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srhcp\" (UniqueName: \"kubernetes.io/projected/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-kube-api-access-srhcp\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mfmjt\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:24 crc kubenswrapper[4679]: I0203 12:39:24.565020 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:39:25 crc kubenswrapper[4679]: I0203 12:39:25.069292 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt"] Feb 03 12:39:25 crc kubenswrapper[4679]: I0203 12:39:25.084803 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" event={"ID":"cc05f31e-be8f-497a-ba7b-1f5c54d070c4","Type":"ContainerStarted","Data":"6fd63a03cc6f79f5e479a71afd07d2f1c813d4d06efc9870d4f9418a299aee14"} Feb 03 12:39:27 crc kubenswrapper[4679]: I0203 12:39:27.132012 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" event={"ID":"cc05f31e-be8f-497a-ba7b-1f5c54d070c4","Type":"ContainerStarted","Data":"792c7530008efbe9a0e9201fcbaa53a677062da67f6954fb82de1cc0a7e20f88"} Feb 03 12:39:27 crc kubenswrapper[4679]: I0203 12:39:27.160661 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" podStartSLOduration=1.906432402 podStartE2EDuration="3.160631603s" podCreationTimestamp="2026-02-03 12:39:24 +0000 UTC" firstStartedPulling="2026-02-03 12:39:25.076874971 +0000 UTC m=+2037.551771059" lastFinishedPulling="2026-02-03 12:39:26.331074172 +0000 UTC m=+2038.805970260" observedRunningTime="2026-02-03 12:39:27.152137886 +0000 UTC m=+2039.627033994" watchObservedRunningTime="2026-02-03 12:39:27.160631603 +0000 UTC m=+2039.635527691" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.126577 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wph2b"] Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.129197 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.150075 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wph2b"] Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.182520 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-utilities\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.182675 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-catalog-content\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.182736 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psrl\" (UniqueName: \"kubernetes.io/projected/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-kube-api-access-6psrl\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.285705 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-catalog-content\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.285808 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6psrl\" (UniqueName: \"kubernetes.io/projected/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-kube-api-access-6psrl\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.286141 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-utilities\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.286256 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-catalog-content\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.286584 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-utilities\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.309328 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6psrl\" (UniqueName: \"kubernetes.io/projected/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-kube-api-access-6psrl\") pod \"redhat-operators-wph2b\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.449398 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:22 crc kubenswrapper[4679]: I0203 12:40:22.952878 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wph2b"] Feb 03 12:40:23 crc kubenswrapper[4679]: I0203 12:40:23.680248 4679 generic.go:334] "Generic (PLEG): container finished" podID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerID="da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15" exitCode=0 Feb 03 12:40:23 crc kubenswrapper[4679]: I0203 12:40:23.680569 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wph2b" event={"ID":"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1","Type":"ContainerDied","Data":"da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15"} Feb 03 12:40:23 crc kubenswrapper[4679]: I0203 12:40:23.680644 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wph2b" event={"ID":"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1","Type":"ContainerStarted","Data":"cc87498343b7457d33b83b468b04a41f1343441f22ec6bc444e36151cb077bf5"} Feb 03 12:40:23 crc kubenswrapper[4679]: I0203 12:40:23.683755 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:40:25 crc kubenswrapper[4679]: I0203 12:40:25.702304 4679 generic.go:334] "Generic (PLEG): container finished" podID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerID="50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68" exitCode=0 Feb 03 12:40:25 crc kubenswrapper[4679]: I0203 12:40:25.702379 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wph2b" event={"ID":"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1","Type":"ContainerDied","Data":"50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68"} Feb 03 12:40:27 crc kubenswrapper[4679]: I0203 12:40:27.723047 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wph2b" event={"ID":"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1","Type":"ContainerStarted","Data":"6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822"} Feb 03 12:40:27 crc kubenswrapper[4679]: I0203 12:40:27.753609 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wph2b" podStartSLOduration=2.624478039 podStartE2EDuration="5.753590157s" podCreationTimestamp="2026-02-03 12:40:22 +0000 UTC" firstStartedPulling="2026-02-03 12:40:23.683482411 +0000 UTC m=+2096.158378499" lastFinishedPulling="2026-02-03 12:40:26.812594529 +0000 UTC m=+2099.287490617" observedRunningTime="2026-02-03 12:40:27.752660563 +0000 UTC m=+2100.227556651" watchObservedRunningTime="2026-02-03 12:40:27.753590157 +0000 UTC m=+2100.228486235" Feb 03 12:40:28 crc kubenswrapper[4679]: I0203 12:40:28.734118 4679 generic.go:334] "Generic (PLEG): container finished" podID="cc05f31e-be8f-497a-ba7b-1f5c54d070c4" containerID="792c7530008efbe9a0e9201fcbaa53a677062da67f6954fb82de1cc0a7e20f88" exitCode=0 Feb 03 12:40:28 crc kubenswrapper[4679]: I0203 12:40:28.734216 4679 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" event={"ID":"cc05f31e-be8f-497a-ba7b-1f5c54d070c4","Type":"ContainerDied","Data":"792c7530008efbe9a0e9201fcbaa53a677062da67f6954fb82de1cc0a7e20f88"} Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.173343 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.247983 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srhcp\" (UniqueName: \"kubernetes.io/projected/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-kube-api-access-srhcp\") pod \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.248220 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovncontroller-config-0\") pod \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.248275 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovn-combined-ca-bundle\") pod \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.248340 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ssh-key-openstack-edpm-ipam\") pod \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.248440 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-inventory\") pod \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\" (UID: \"cc05f31e-be8f-497a-ba7b-1f5c54d070c4\") " Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.253720 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cc05f31e-be8f-497a-ba7b-1f5c54d070c4" (UID: "cc05f31e-be8f-497a-ba7b-1f5c54d070c4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.259013 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-kube-api-access-srhcp" (OuterVolumeSpecName: "kube-api-access-srhcp") pod "cc05f31e-be8f-497a-ba7b-1f5c54d070c4" (UID: "cc05f31e-be8f-497a-ba7b-1f5c54d070c4"). InnerVolumeSpecName "kube-api-access-srhcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.276461 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cc05f31e-be8f-497a-ba7b-1f5c54d070c4" (UID: "cc05f31e-be8f-497a-ba7b-1f5c54d070c4"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.285182 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-inventory" (OuterVolumeSpecName: "inventory") pod "cc05f31e-be8f-497a-ba7b-1f5c54d070c4" (UID: "cc05f31e-be8f-497a-ba7b-1f5c54d070c4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.285229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc05f31e-be8f-497a-ba7b-1f5c54d070c4" (UID: "cc05f31e-be8f-497a-ba7b-1f5c54d070c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.351107 4679 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.351146 4679 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.351157 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.351168 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.351179 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srhcp\" (UniqueName: \"kubernetes.io/projected/cc05f31e-be8f-497a-ba7b-1f5c54d070c4-kube-api-access-srhcp\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.754013 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" event={"ID":"cc05f31e-be8f-497a-ba7b-1f5c54d070c4","Type":"ContainerDied","Data":"6fd63a03cc6f79f5e479a71afd07d2f1c813d4d06efc9870d4f9418a299aee14"} Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.754058 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mfmjt" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.754064 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fd63a03cc6f79f5e479a71afd07d2f1c813d4d06efc9870d4f9418a299aee14" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.848149 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2"] Feb 03 12:40:30 crc kubenswrapper[4679]: E0203 12:40:30.848694 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc05f31e-be8f-497a-ba7b-1f5c54d070c4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.848721 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc05f31e-be8f-497a-ba7b-1f5c54d070c4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.848961 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc05f31e-be8f-497a-ba7b-1f5c54d070c4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.849772 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.852677 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.852776 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.856065 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.856079 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.856635 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.857498 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2"] Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.858989 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.962989 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.963345 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.963567 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfkn\" (UniqueName: \"kubernetes.io/projected/cc385278-b837-4f12-bd6f-5fdd89b02bd7-kube-api-access-fwfkn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.963620 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.963785 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:30 crc kubenswrapper[4679]: I0203 12:40:30.963861 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.065896 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.066013 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.066071 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwfkn\" (UniqueName: \"kubernetes.io/projected/cc385278-b837-4f12-bd6f-5fdd89b02bd7-kube-api-access-fwfkn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.066104 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.066148 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.066200 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.070425 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.070740 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.071538 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.079937 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.082010 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.086180 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwfkn\" (UniqueName: \"kubernetes.io/projected/cc385278-b837-4f12-bd6f-5fdd89b02bd7-kube-api-access-fwfkn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.168450 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.717017 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2"] Feb 03 12:40:31 crc kubenswrapper[4679]: W0203 12:40:31.721477 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc385278_b837_4f12_bd6f_5fdd89b02bd7.slice/crio-94c067992095c0556c6690d30c09e24204b50ff82d73858bc8d8d691aee1ccd2 WatchSource:0}: Error finding container 94c067992095c0556c6690d30c09e24204b50ff82d73858bc8d8d691aee1ccd2: Status 404 returned error can't find the container with id 94c067992095c0556c6690d30c09e24204b50ff82d73858bc8d8d691aee1ccd2 Feb 03 12:40:31 crc kubenswrapper[4679]: I0203 12:40:31.787340 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" event={"ID":"cc385278-b837-4f12-bd6f-5fdd89b02bd7","Type":"ContainerStarted","Data":"94c067992095c0556c6690d30c09e24204b50ff82d73858bc8d8d691aee1ccd2"} Feb 03 12:40:32 crc kubenswrapper[4679]: I0203 12:40:32.450604 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:32 crc kubenswrapper[4679]: I0203 12:40:32.450658 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:33 crc kubenswrapper[4679]: I0203 12:40:33.500857 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wph2b" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="registry-server" probeResult="failure" output=< Feb 03 12:40:33 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s Feb 03 12:40:33 crc kubenswrapper[4679]: > Feb 03 12:40:34 crc kubenswrapper[4679]: I0203 12:40:34.819235 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" event={"ID":"cc385278-b837-4f12-bd6f-5fdd89b02bd7","Type":"ContainerStarted","Data":"897abe64c5cadd2f08af4a6b18dfb903999b6bbd0863c07ad9655c655ab09f64"} Feb 03 12:40:34 crc kubenswrapper[4679]: I0203 12:40:34.845521 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" podStartSLOduration=2.408626064 podStartE2EDuration="4.845493837s" podCreationTimestamp="2026-02-03 12:40:30 +0000 UTC" firstStartedPulling="2026-02-03 12:40:31.724180148 +0000 UTC m=+2104.199076236" lastFinishedPulling="2026-02-03 12:40:34.161047921 +0000 UTC 
m=+2106.635944009" observedRunningTime="2026-02-03 12:40:34.835115083 +0000 UTC m=+2107.310011171" watchObservedRunningTime="2026-02-03 12:40:34.845493837 +0000 UTC m=+2107.320389925" Feb 03 12:40:36 crc kubenswrapper[4679]: I0203 12:40:36.735675 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:40:36 crc kubenswrapper[4679]: I0203 12:40:36.735774 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:40:42 crc kubenswrapper[4679]: I0203 12:40:42.498044 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:42 crc kubenswrapper[4679]: I0203 12:40:42.549139 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:42 crc kubenswrapper[4679]: I0203 12:40:42.741855 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wph2b"] Feb 03 12:40:43 crc kubenswrapper[4679]: I0203 12:40:43.901559 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wph2b" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="registry-server" containerID="cri-o://6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822" gracePeriod=2 Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.380662 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.568469 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6psrl\" (UniqueName: \"kubernetes.io/projected/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-kube-api-access-6psrl\") pod \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.568541 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-utilities\") pod \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.568728 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-catalog-content\") pod \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\" (UID: \"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1\") " Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.569752 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-utilities" (OuterVolumeSpecName: "utilities") pod "de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" (UID: "de0afec0-2aba-4e29-b3e3-2c8f0e355ec1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.574771 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-kube-api-access-6psrl" (OuterVolumeSpecName: "kube-api-access-6psrl") pod "de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" (UID: "de0afec0-2aba-4e29-b3e3-2c8f0e355ec1"). InnerVolumeSpecName "kube-api-access-6psrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.670484 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6psrl\" (UniqueName: \"kubernetes.io/projected/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-kube-api-access-6psrl\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.670515 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.713840 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" (UID: "de0afec0-2aba-4e29-b3e3-2c8f0e355ec1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.772502 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.926164 4679 generic.go:334] "Generic (PLEG): container finished" podID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerID="6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822" exitCode=0 Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.926213 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wph2b" event={"ID":"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1","Type":"ContainerDied","Data":"6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822"} Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.926229 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wph2b" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.926260 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wph2b" event={"ID":"de0afec0-2aba-4e29-b3e3-2c8f0e355ec1","Type":"ContainerDied","Data":"cc87498343b7457d33b83b468b04a41f1343441f22ec6bc444e36151cb077bf5"} Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.926282 4679 scope.go:117] "RemoveContainer" containerID="6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.946150 4679 scope.go:117] "RemoveContainer" containerID="50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68" Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.960164 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wph2b"] Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.970601 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wph2b"] Feb 03 12:40:44 crc kubenswrapper[4679]: I0203 12:40:44.988342 4679 scope.go:117] "RemoveContainer" containerID="da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15" Feb 03 12:40:45 crc kubenswrapper[4679]: I0203 12:40:45.017109 4679 scope.go:117] "RemoveContainer" containerID="6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822" Feb 03 12:40:45 crc kubenswrapper[4679]: E0203 12:40:45.017645 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822\": container with ID starting with 6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822 not found: ID does not exist" containerID="6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822" Feb 03 12:40:45 crc kubenswrapper[4679]: I0203 12:40:45.017681 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822"} err="failed to get container status \"6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822\": rpc error: code = NotFound desc = could not find container \"6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822\": container with ID starting with 6df281e6b31a68d79dc19903e7065899350fdf7387e1d9ee5e93b2e8173fc822 not found: ID does not exist" Feb 03 12:40:45 crc kubenswrapper[4679]: I0203 12:40:45.017711 4679 scope.go:117] "RemoveContainer" containerID="50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68" Feb 03 12:40:45 crc kubenswrapper[4679]: E0203 12:40:45.018181 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68\": container with ID starting with 50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68 not found: ID does not exist" containerID="50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68" Feb 03 12:40:45 crc kubenswrapper[4679]: I0203 12:40:45.018239 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68"} err="failed to get container status \"50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68\": rpc error: code = NotFound desc = could not find container 
\"50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68\": container with ID starting with 50f75655b525c0ad4f39059d2b4c201686025f6f9c08ee19480cdb5c797f2e68 not found: ID does not exist" Feb 03 12:40:45 crc kubenswrapper[4679]: I0203 12:40:45.018273 4679 scope.go:117] "RemoveContainer" containerID="da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15" Feb 03 12:40:45 crc kubenswrapper[4679]: E0203 12:40:45.018619 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15\": container with ID starting with da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15 not found: ID does not exist" containerID="da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15" Feb 03 12:40:45 crc kubenswrapper[4679]: I0203 12:40:45.018650 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15"} err="failed to get container status \"da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15\": rpc error: code = NotFound desc = could not find container \"da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15\": container with ID starting with da08a597f7bfd046917ba984220fd738f0287b304a9568db16ae2d4ed337ec15 not found: ID does not exist" Feb 03 12:40:46 crc kubenswrapper[4679]: I0203 12:40:46.222832 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" path="/var/lib/kubelet/pods/de0afec0-2aba-4e29-b3e3-2c8f0e355ec1/volumes" Feb 03 12:41:06 crc kubenswrapper[4679]: I0203 12:41:06.736021 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:41:06 crc kubenswrapper[4679]: I0203 12:41:06.736731 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:41:19 crc kubenswrapper[4679]: I0203 12:41:19.255462 4679 generic.go:334] "Generic (PLEG): container finished" podID="cc385278-b837-4f12-bd6f-5fdd89b02bd7" containerID="897abe64c5cadd2f08af4a6b18dfb903999b6bbd0863c07ad9655c655ab09f64" exitCode=0 Feb 03 12:41:19 crc kubenswrapper[4679]: I0203 12:41:19.255640 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" event={"ID":"cc385278-b837-4f12-bd6f-5fdd89b02bd7","Type":"ContainerDied","Data":"897abe64c5cadd2f08af4a6b18dfb903999b6bbd0863c07ad9655c655ab09f64"} Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.706126 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.872381 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-metadata-combined-ca-bundle\") pod \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.872602 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-nova-metadata-neutron-config-0\") pod \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.872714 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-ssh-key-openstack-edpm-ipam\") pod \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.872855 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.872954 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwfkn\" (UniqueName: \"kubernetes.io/projected/cc385278-b837-4f12-bd6f-5fdd89b02bd7-kube-api-access-fwfkn\") pod \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.873055 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-inventory\") pod \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\" (UID: \"cc385278-b837-4f12-bd6f-5fdd89b02bd7\") " Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.879156 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc385278-b837-4f12-bd6f-5fdd89b02bd7-kube-api-access-fwfkn" (OuterVolumeSpecName: "kube-api-access-fwfkn") pod "cc385278-b837-4f12-bd6f-5fdd89b02bd7" (UID: "cc385278-b837-4f12-bd6f-5fdd89b02bd7"). InnerVolumeSpecName "kube-api-access-fwfkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.889598 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cc385278-b837-4f12-bd6f-5fdd89b02bd7" (UID: "cc385278-b837-4f12-bd6f-5fdd89b02bd7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.903276 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc385278-b837-4f12-bd6f-5fdd89b02bd7" (UID: "cc385278-b837-4f12-bd6f-5fdd89b02bd7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.903637 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-inventory" (OuterVolumeSpecName: "inventory") pod "cc385278-b837-4f12-bd6f-5fdd89b02bd7" (UID: "cc385278-b837-4f12-bd6f-5fdd89b02bd7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.909846 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cc385278-b837-4f12-bd6f-5fdd89b02bd7" (UID: "cc385278-b837-4f12-bd6f-5fdd89b02bd7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.911283 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cc385278-b837-4f12-bd6f-5fdd89b02bd7" (UID: "cc385278-b837-4f12-bd6f-5fdd89b02bd7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.975519 4679 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.975559 4679 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.975574 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.975590 4679 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.975605 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwfkn\" (UniqueName: \"kubernetes.io/projected/cc385278-b837-4f12-bd6f-5fdd89b02bd7-kube-api-access-fwfkn\") on node \"crc\" DevicePath \"\"" Feb 03 12:41:20 crc kubenswrapper[4679]: I0203 12:41:20.975616 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc385278-b837-4f12-bd6f-5fdd89b02bd7-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.270580 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" event={"ID":"cc385278-b837-4f12-bd6f-5fdd89b02bd7","Type":"ContainerDied","Data":"94c067992095c0556c6690d30c09e24204b50ff82d73858bc8d8d691aee1ccd2"} Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.270623 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c067992095c0556c6690d30c09e24204b50ff82d73858bc8d8d691aee1ccd2" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.270627 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.369884 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5"] Feb 03 12:41:21 crc kubenswrapper[4679]: E0203 12:41:21.370339 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc385278-b837-4f12-bd6f-5fdd89b02bd7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.370371 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc385278-b837-4f12-bd6f-5fdd89b02bd7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 12:41:21 crc kubenswrapper[4679]: E0203 12:41:21.370398 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="extract-content" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.370405 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="extract-content" Feb 03 12:41:21 crc kubenswrapper[4679]: E0203 12:41:21.370420 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="extract-utilities" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.370428 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="extract-utilities" Feb 03 12:41:21 crc kubenswrapper[4679]: E0203 12:41:21.370447 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="registry-server" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.370456 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="registry-server" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.370628 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc385278-b837-4f12-bd6f-5fdd89b02bd7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.370647 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0afec0-2aba-4e29-b3e3-2c8f0e355ec1" containerName="registry-server" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.371241 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.373159 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.373200 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.373206 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.373612 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.374504 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.387307 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5"] Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.389272 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.389374 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxfbr\" (UniqueName: \"kubernetes.io/projected/67eba320-30c8-4f6e-9958-f58ee00e9bdc-kube-api-access-mxfbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.389411 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.389742 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.389798 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.491427 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.491717 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxfbr\" (UniqueName: \"kubernetes.io/projected/67eba320-30c8-4f6e-9958-f58ee00e9bdc-kube-api-access-mxfbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.491803 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.491962 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.492048 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.495828 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.496122 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.499741 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.504640 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.509384 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxfbr\" (UniqueName: \"kubernetes.io/projected/67eba320-30c8-4f6e-9958-f58ee00e9bdc-kube-api-access-mxfbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:21 crc kubenswrapper[4679]: I0203 12:41:21.690479 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:41:22 crc kubenswrapper[4679]: I0203 12:41:22.278827 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5"] Feb 03 12:41:23 crc kubenswrapper[4679]: I0203 12:41:23.290637 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" event={"ID":"67eba320-30c8-4f6e-9958-f58ee00e9bdc","Type":"ContainerStarted","Data":"e0ed9a9c9b75fe2d19dd5181e2ec0f3aad6b9eed9bdaeb191b581741a9bfa446"} Feb 03 12:41:23 crc kubenswrapper[4679]: I0203 12:41:23.291244 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" event={"ID":"67eba320-30c8-4f6e-9958-f58ee00e9bdc","Type":"ContainerStarted","Data":"3e63135fa27c8b87e0b12528433e7f466ad875870ecfeb5d920534e4ebd47f36"} Feb 03 12:41:23 crc kubenswrapper[4679]: I0203 12:41:23.312865 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" podStartSLOduration=1.878735933 podStartE2EDuration="2.312844985s" podCreationTimestamp="2026-02-03 12:41:21 +0000 UTC" firstStartedPulling="2026-02-03 12:41:22.288692842 +0000 UTC m=+2154.763588930" lastFinishedPulling="2026-02-03 12:41:22.722801894 +0000 UTC m=+2155.197697982" observedRunningTime="2026-02-03 12:41:23.309736316 +0000 UTC m=+2155.784632404" watchObservedRunningTime="2026-02-03 12:41:23.312844985 +0000 UTC m=+2155.787741073" Feb 03 12:41:36 crc kubenswrapper[4679]: I0203 12:41:36.736508 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:41:36 crc kubenswrapper[4679]: I0203 12:41:36.737083 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:41:36 crc kubenswrapper[4679]: I0203 12:41:36.737132 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:41:36 crc kubenswrapper[4679]: I0203 12:41:36.737944 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f8b640d616a097de390c17670aa347eacf35caf5e83ed996a2a3d78316e76fdb"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:41:36 crc kubenswrapper[4679]: I0203 12:41:36.738001 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://f8b640d616a097de390c17670aa347eacf35caf5e83ed996a2a3d78316e76fdb" gracePeriod=600 Feb 03 12:41:37 crc kubenswrapper[4679]: I0203 12:41:37.413008 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="f8b640d616a097de390c17670aa347eacf35caf5e83ed996a2a3d78316e76fdb" exitCode=0 Feb 03 12:41:37 crc kubenswrapper[4679]: I0203 12:41:37.413496 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"f8b640d616a097de390c17670aa347eacf35caf5e83ed996a2a3d78316e76fdb"} Feb 03 12:41:37 crc kubenswrapper[4679]: I0203 12:41:37.413574 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"} Feb 03 12:41:37 crc kubenswrapper[4679]: I0203 12:41:37.413594 4679 scope.go:117] "RemoveContainer" containerID="9badc72f300d2ca6a54daa6f2f2d51d3bbcb796775ad7f32958ff3524049341b" Feb 03 12:44:06 crc kubenswrapper[4679]: I0203 12:44:06.736706 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:44:06 crc kubenswrapper[4679]: I0203 12:44:06.737659 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:44:07 crc kubenswrapper[4679]: I0203 12:44:07.971750 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfzls"] Feb 03 12:44:07 crc kubenswrapper[4679]: I0203 12:44:07.974555 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.001320 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfzls"] Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.073695 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhcr\" (UniqueName: \"kubernetes.io/projected/fbd46d18-b431-4ec7-b007-641aa420717e-kube-api-access-ldhcr\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.073774 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-catalog-content\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.074014 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-utilities\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.176250 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhcr\" (UniqueName: \"kubernetes.io/projected/fbd46d18-b431-4ec7-b007-641aa420717e-kube-api-access-ldhcr\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.176316 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-catalog-content\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.176856 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-catalog-content\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.177074 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-utilities\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.177408 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-utilities\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.198163 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ldhcr\" (UniqueName: \"kubernetes.io/projected/fbd46d18-b431-4ec7-b007-641aa420717e-kube-api-access-ldhcr\") pod \"community-operators-wfzls\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") " pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.305016 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfzls" Feb 03 12:44:08 crc kubenswrapper[4679]: I0203 12:44:08.861560 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfzls"] Feb 03 12:44:09 crc kubenswrapper[4679]: I0203 12:44:09.436845 4679 generic.go:334] "Generic (PLEG): container finished" podID="fbd46d18-b431-4ec7-b007-641aa420717e" containerID="3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37" exitCode=0 Feb 03 12:44:09 crc kubenswrapper[4679]: I0203 12:44:09.436947 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerDied","Data":"3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37"} Feb 03 12:44:09 crc kubenswrapper[4679]: I0203 12:44:09.437223 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerStarted","Data":"16bc4597fbeae7f898e9c4c6b68aa79c2af80e4e28e85d2071b1bad82e665b8a"} Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.369946 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wbcw9"] Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.372437 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.383502 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbcw9"] Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.432908 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-catalog-content\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.433066 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc27z\" (UniqueName: \"kubernetes.io/projected/8469ab87-8e4a-4347-abbb-c8cc3500fde6-kube-api-access-pc27z\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.433166 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-utilities\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.452635 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerStarted","Data":"d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60"} Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.534543 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc27z\" (UniqueName: \"kubernetes.io/projected/8469ab87-8e4a-4347-abbb-c8cc3500fde6-kube-api-access-pc27z\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.534686 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-utilities\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.534734 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-catalog-content\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.535285 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-utilities\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.535412 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-catalog-content\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.562505 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc27z\" (UniqueName: \"kubernetes.io/projected/8469ab87-8e4a-4347-abbb-c8cc3500fde6-kube-api-access-pc27z\") pod \"redhat-marketplace-wbcw9\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") " pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:10 crc kubenswrapper[4679]: I0203 12:44:10.698755 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbcw9" Feb 03 12:44:11 crc kubenswrapper[4679]: I0203 12:44:11.163942 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbcw9"] Feb 03 12:44:11 crc kubenswrapper[4679]: I0203 12:44:11.462766 4679 generic.go:334] "Generic (PLEG): container finished" podID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerID="119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da" exitCode=0 Feb 03 12:44:11 crc kubenswrapper[4679]: I0203 12:44:11.462874 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbcw9" event={"ID":"8469ab87-8e4a-4347-abbb-c8cc3500fde6","Type":"ContainerDied","Data":"119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da"} Feb 03 12:44:11 crc kubenswrapper[4679]: I0203 12:44:11.462905 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbcw9" event={"ID":"8469ab87-8e4a-4347-abbb-c8cc3500fde6","Type":"ContainerStarted","Data":"a6afd2a7b0c7b4e2086592d37a652976f31100531060974f1b7037642bd43ae3"} Feb 03 12:44:11 crc kubenswrapper[4679]: I0203 12:44:11.465998 4679 generic.go:334] "Generic (PLEG): container finished" podID="fbd46d18-b431-4ec7-b007-641aa420717e" containerID="d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60" exitCode=0 Feb 03 12:44:11 crc kubenswrapper[4679]: I0203 12:44:11.466042 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerDied","Data":"d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60"} Feb 03 12:44:12 crc kubenswrapper[4679]: I0203 12:44:12.476624 4679 generic.go:334] "Generic (PLEG): container finished" podID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerID="9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c" exitCode=0 Feb 03 12:44:12 crc kubenswrapper[4679]: I0203 12:44:12.476716 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbcw9" event={"ID":"8469ab87-8e4a-4347-abbb-c8cc3500fde6","Type":"ContainerDied","Data":"9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c"} Feb 03 12:44:12 crc kubenswrapper[4679]: I0203 12:44:12.479351 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerStarted","Data":"e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533"} Feb 03 12:44:12 crc kubenswrapper[4679]: I0203 12:44:12.534960 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfzls" 
Feb 03 12:44:13 crc kubenswrapper[4679]: I0203 12:44:13.491872 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbcw9" event={"ID":"8469ab87-8e4a-4347-abbb-c8cc3500fde6","Type":"ContainerStarted","Data":"64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924"}
Feb 03 12:44:13 crc kubenswrapper[4679]: I0203 12:44:13.524657 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wbcw9" podStartSLOduration=1.88271844 podStartE2EDuration="3.524625796s" podCreationTimestamp="2026-02-03 12:44:10 +0000 UTC" firstStartedPulling="2026-02-03 12:44:11.465538398 +0000 UTC m=+2323.940434486" lastFinishedPulling="2026-02-03 12:44:13.107445754 +0000 UTC m=+2325.582341842" observedRunningTime="2026-02-03 12:44:13.510400295 +0000 UTC m=+2325.985296393" watchObservedRunningTime="2026-02-03 12:44:13.524625796 +0000 UTC m=+2325.999521894"
Feb 03 12:44:18 crc kubenswrapper[4679]: I0203 12:44:18.305984 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wfzls"
Feb 03 12:44:18 crc kubenswrapper[4679]: I0203 12:44:18.306479 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfzls"
Feb 03 12:44:18 crc kubenswrapper[4679]: I0203 12:44:18.356871 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wfzls"
Feb 03 12:44:18 crc kubenswrapper[4679]: I0203 12:44:18.588502 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfzls"
Feb 03 12:44:18 crc kubenswrapper[4679]: I0203 12:44:18.757057 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfzls"]
Feb 03 12:44:20 crc kubenswrapper[4679]: I0203 12:44:20.552312 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wfzls" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="registry-server" containerID="cri-o://e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533" gracePeriod=2
Feb 03 12:44:20 crc kubenswrapper[4679]: I0203 12:44:20.699225 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wbcw9"
Feb 03 12:44:20 crc kubenswrapper[4679]: I0203 12:44:20.699640 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wbcw9"
Feb 03 12:44:20 crc kubenswrapper[4679]: I0203 12:44:20.768144 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wbcw9"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.011480 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfzls"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.098427 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-catalog-content\") pod \"fbd46d18-b431-4ec7-b007-641aa420717e\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") "
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.098616 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhcr\" (UniqueName: \"kubernetes.io/projected/fbd46d18-b431-4ec7-b007-641aa420717e-kube-api-access-ldhcr\") pod \"fbd46d18-b431-4ec7-b007-641aa420717e\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") "
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.098801 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-utilities\") pod \"fbd46d18-b431-4ec7-b007-641aa420717e\" (UID: \"fbd46d18-b431-4ec7-b007-641aa420717e\") "
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.099866 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-utilities" (OuterVolumeSpecName: "utilities") pod "fbd46d18-b431-4ec7-b007-641aa420717e" (UID: "fbd46d18-b431-4ec7-b007-641aa420717e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.105264 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd46d18-b431-4ec7-b007-641aa420717e-kube-api-access-ldhcr" (OuterVolumeSpecName: "kube-api-access-ldhcr") pod "fbd46d18-b431-4ec7-b007-641aa420717e" (UID: "fbd46d18-b431-4ec7-b007-641aa420717e"). InnerVolumeSpecName "kube-api-access-ldhcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.167562 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbd46d18-b431-4ec7-b007-641aa420717e" (UID: "fbd46d18-b431-4ec7-b007-641aa420717e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.201203 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhcr\" (UniqueName: \"kubernetes.io/projected/fbd46d18-b431-4ec7-b007-641aa420717e-kube-api-access-ldhcr\") on node \"crc\" DevicePath \"\""
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.201255 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.201269 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd46d18-b431-4ec7-b007-641aa420717e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.568889 4679 generic.go:334] "Generic (PLEG): container finished" podID="fbd46d18-b431-4ec7-b007-641aa420717e" containerID="e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533" exitCode=0
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.568958 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfzls"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.568974 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerDied","Data":"e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533"}
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.569055 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfzls" event={"ID":"fbd46d18-b431-4ec7-b007-641aa420717e","Type":"ContainerDied","Data":"16bc4597fbeae7f898e9c4c6b68aa79c2af80e4e28e85d2071b1bad82e665b8a"}
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.569081 4679 scope.go:117] "RemoveContainer" containerID="e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.601457 4679 scope.go:117] "RemoveContainer" containerID="d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.607522 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfzls"]
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.622402 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wfzls"]
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.626129 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wbcw9"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.628659 4679 scope.go:117] "RemoveContainer" containerID="3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.684631 4679 scope.go:117] "RemoveContainer" containerID="e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533"
Feb 03 12:44:21 crc kubenswrapper[4679]: E0203 12:44:21.685247 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533\": container with ID starting with e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533 not found: ID does not exist" containerID="e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.685288 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533"} err="failed to get container status \"e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533\": rpc error: code = NotFound desc = could not find container \"e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533\": container with ID starting with e98965e08a906ad7da987de4a51b30c90a0f09b6c2b03e9e8d2cd2c6b19d0533 not found: ID does not exist"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.685319 4679 scope.go:117] "RemoveContainer" containerID="d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60"
Feb 03 12:44:21 crc kubenswrapper[4679]: E0203 12:44:21.685671 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60\": container with ID starting with d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60 not found: ID does not exist" containerID="d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.685771 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60"} err="failed to get container status \"d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60\": rpc error: code = NotFound desc = could not find container \"d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60\": container with ID starting with d043afbad6d35308c5020682a2717deb1d5f63d28724842b4cdcabf00ae9cb60 not found: ID does not exist"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.685850 4679 scope.go:117] "RemoveContainer" containerID="3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37"
Feb 03 12:44:21 crc kubenswrapper[4679]: E0203 12:44:21.686590 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37\": container with ID starting with 3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37 not found: ID does not exist" containerID="3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37"
Feb 03 12:44:21 crc kubenswrapper[4679]: I0203 12:44:21.686617 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37"} err="failed to get container status \"3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37\": rpc error: code = NotFound desc = could not find container \"3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37\": container with ID starting with 3aa025f4ab0053b0014f5f6d71bc6ef8f9cbca5f8b289679255674b8aa5b2b37 not found: ID does not exist"
Feb 03 12:44:22 crc kubenswrapper[4679]: I0203 12:44:22.234240 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" path="/var/lib/kubelet/pods/fbd46d18-b431-4ec7-b007-641aa420717e/volumes"
Feb 03 12:44:23 crc kubenswrapper[4679]: I0203 12:44:23.956233 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbcw9"]
Feb 03 12:44:23 crc kubenswrapper[4679]: I0203 12:44:23.956817 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wbcw9" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="registry-server" containerID="cri-o://64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924" gracePeriod=2
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.518206 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbcw9"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.578813 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc27z\" (UniqueName: \"kubernetes.io/projected/8469ab87-8e4a-4347-abbb-c8cc3500fde6-kube-api-access-pc27z\") pod \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") "
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.578913 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-catalog-content\") pod \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") "
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.579122 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-utilities\") pod \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\" (UID: \"8469ab87-8e4a-4347-abbb-c8cc3500fde6\") "
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.580142 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-utilities" (OuterVolumeSpecName: "utilities") pod "8469ab87-8e4a-4347-abbb-c8cc3500fde6" (UID: "8469ab87-8e4a-4347-abbb-c8cc3500fde6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.585412 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8469ab87-8e4a-4347-abbb-c8cc3500fde6-kube-api-access-pc27z" (OuterVolumeSpecName: "kube-api-access-pc27z") pod "8469ab87-8e4a-4347-abbb-c8cc3500fde6" (UID: "8469ab87-8e4a-4347-abbb-c8cc3500fde6"). InnerVolumeSpecName "kube-api-access-pc27z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.599505 4679 generic.go:334] "Generic (PLEG): container finished" podID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerID="64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924" exitCode=0
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.599557 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbcw9" event={"ID":"8469ab87-8e4a-4347-abbb-c8cc3500fde6","Type":"ContainerDied","Data":"64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924"}
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.599573 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbcw9"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.599610 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbcw9" event={"ID":"8469ab87-8e4a-4347-abbb-c8cc3500fde6","Type":"ContainerDied","Data":"a6afd2a7b0c7b4e2086592d37a652976f31100531060974f1b7037642bd43ae3"}
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.599633 4679 scope.go:117] "RemoveContainer" containerID="64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.605527 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8469ab87-8e4a-4347-abbb-c8cc3500fde6" (UID: "8469ab87-8e4a-4347-abbb-c8cc3500fde6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.651555 4679 scope.go:117] "RemoveContainer" containerID="9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.675868 4679 scope.go:117] "RemoveContainer" containerID="119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.681976 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.682046 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc27z\" (UniqueName: \"kubernetes.io/projected/8469ab87-8e4a-4347-abbb-c8cc3500fde6-kube-api-access-pc27z\") on node \"crc\" DevicePath \"\""
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.682061 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8469ab87-8e4a-4347-abbb-c8cc3500fde6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.727193 4679 scope.go:117] "RemoveContainer" containerID="64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924"
Feb 03 12:44:24 crc kubenswrapper[4679]: E0203 12:44:24.727688 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924\": container with ID starting with 64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924 not found: ID does not exist" containerID="64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.727725 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924"} err="failed to get container status \"64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924\": rpc error: code = NotFound desc = could not find container \"64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924\": container with ID starting with 64f63608055c09475c88285a5c04856c888e1a5e36a12925f0a703ca374ab924 not found: ID does not exist"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.727752 4679 scope.go:117] "RemoveContainer" containerID="9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c"
Feb 03 12:44:24 crc kubenswrapper[4679]: E0203 12:44:24.728070 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c\": container with ID starting with 9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c not found: ID does not exist" containerID="9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.728097 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c"} err="failed to get container status \"9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c\": rpc error: code = NotFound desc = could not find container \"9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c\": container with ID starting with 9f51bd9591bfdbf6f7cd1dfc4f03cee47e942c81b15976eb3b72b6749c8fe18c not found: ID does not exist"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.728117 4679 scope.go:117] "RemoveContainer" containerID="119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da"
Feb 03 12:44:24 crc kubenswrapper[4679]: E0203 12:44:24.728338 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da\": container with ID starting with 119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da not found: ID does not exist" containerID="119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.728379 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da"} err="failed to get container status \"119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da\": rpc error: code = NotFound desc = could not find container \"119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da\": container with ID starting with 119567aea44c1fc4400f7da24d352c1339939b5a90c0d83c10708f3aeadfb4da not found: ID does not exist"
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.938879 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbcw9"]
Feb 03 12:44:24 crc kubenswrapper[4679]: I0203 12:44:24.952536 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbcw9"]
Feb 03 12:44:26 crc kubenswrapper[4679]: I0203 12:44:26.222934 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" path="/var/lib/kubelet/pods/8469ab87-8e4a-4347-abbb-c8cc3500fde6/volumes"
Feb 03 12:44:36 crc kubenswrapper[4679]: I0203 12:44:36.736045 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:44:36 crc kubenswrapper[4679]: I0203 12:44:36.736680 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:44:58 crc kubenswrapper[4679]: I0203 12:44:58.908726 4679 generic.go:334] "Generic (PLEG): container finished" podID="67eba320-30c8-4f6e-9958-f58ee00e9bdc" containerID="e0ed9a9c9b75fe2d19dd5181e2ec0f3aad6b9eed9bdaeb191b581741a9bfa446" exitCode=0
Feb 03 12:44:58 crc kubenswrapper[4679]: I0203 12:44:58.908815 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" event={"ID":"67eba320-30c8-4f6e-9958-f58ee00e9bdc","Type":"ContainerDied","Data":"e0ed9a9c9b75fe2d19dd5181e2ec0f3aad6b9eed9bdaeb191b581741a9bfa446"}
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.149773 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"]
Feb 03 12:45:00 crc kubenswrapper[4679]: E0203 12:45:00.150603 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="extract-utilities"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.150621 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="extract-utilities"
Feb 03 12:45:00 crc kubenswrapper[4679]: E0203 12:45:00.151162 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="registry-server"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151180 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="registry-server"
Feb 03 12:45:00 crc kubenswrapper[4679]: E0203 12:45:00.151205 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="extract-content"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151214 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="extract-content"
Feb 03 12:45:00 crc kubenswrapper[4679]: E0203 12:45:00.151227 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="registry-server"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151234 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="registry-server"
Feb 03 12:45:00 crc kubenswrapper[4679]: E0203 12:45:00.151245 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="extract-content"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151253 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="extract-content"
Feb 03 12:45:00 crc kubenswrapper[4679]: E0203 12:45:00.151281 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="extract-utilities"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151290 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="extract-utilities"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151610 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="8469ab87-8e4a-4347-abbb-c8cc3500fde6" containerName="registry-server"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.151630 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd46d18-b431-4ec7-b007-641aa420717e" containerName="registry-server"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.152501 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.154346 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.154545 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.168438 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"]
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.237168 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/573ad06e-d73b-430e-8041-322bc2d60e4a-secret-volume\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.237341 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjgkk\" (UniqueName: \"kubernetes.io/projected/573ad06e-d73b-430e-8041-322bc2d60e4a-kube-api-access-cjgkk\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.237417 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/573ad06e-d73b-430e-8041-322bc2d60e4a-config-volume\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.339421 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/573ad06e-d73b-430e-8041-322bc2d60e4a-config-volume\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.339551 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/573ad06e-d73b-430e-8041-322bc2d60e4a-secret-volume\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.339622 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjgkk\" (UniqueName: \"kubernetes.io/projected/573ad06e-d73b-430e-8041-322bc2d60e4a-kube-api-access-cjgkk\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.340514 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/573ad06e-d73b-430e-8041-322bc2d60e4a-config-volume\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.347341 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/573ad06e-d73b-430e-8041-322bc2d60e4a-secret-volume\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.357492 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjgkk\" (UniqueName: \"kubernetes.io/projected/573ad06e-d73b-430e-8041-322bc2d60e4a-kube-api-access-cjgkk\") pod \"collect-profiles-29502045-c2ndl\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.423448 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.477784 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.542034 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-combined-ca-bundle\") pod \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") "
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.542276 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-inventory\") pod \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") "
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.542323 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-ssh-key-openstack-edpm-ipam\") pod \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") "
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.542541 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxfbr\" (UniqueName: \"kubernetes.io/projected/67eba320-30c8-4f6e-9958-f58ee00e9bdc-kube-api-access-mxfbr\") pod \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") "
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.542636 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-secret-0\") pod \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\" (UID: \"67eba320-30c8-4f6e-9958-f58ee00e9bdc\") "
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.548277 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "67eba320-30c8-4f6e-9958-f58ee00e9bdc" (UID: "67eba320-30c8-4f6e-9958-f58ee00e9bdc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.548578 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67eba320-30c8-4f6e-9958-f58ee00e9bdc-kube-api-access-mxfbr" (OuterVolumeSpecName: "kube-api-access-mxfbr") pod "67eba320-30c8-4f6e-9958-f58ee00e9bdc" (UID: "67eba320-30c8-4f6e-9958-f58ee00e9bdc"). InnerVolumeSpecName "kube-api-access-mxfbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.578776 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-inventory" (OuterVolumeSpecName: "inventory") pod "67eba320-30c8-4f6e-9958-f58ee00e9bdc" (UID: "67eba320-30c8-4f6e-9958-f58ee00e9bdc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.583641 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67eba320-30c8-4f6e-9958-f58ee00e9bdc" (UID: "67eba320-30c8-4f6e-9958-f58ee00e9bdc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.599588 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "67eba320-30c8-4f6e-9958-f58ee00e9bdc" (UID: "67eba320-30c8-4f6e-9958-f58ee00e9bdc"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.645013 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxfbr\" (UniqueName: \"kubernetes.io/projected/67eba320-30c8-4f6e-9958-f58ee00e9bdc-kube-api-access-mxfbr\") on node \"crc\" DevicePath \"\""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.645056 4679 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.645066 4679 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.645075 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.645085 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eba320-30c8-4f6e-9958-f58ee00e9bdc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.892743 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl"]
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.928349 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" event={"ID":"67eba320-30c8-4f6e-9958-f58ee00e9bdc","Type":"ContainerDied","Data":"3e63135fa27c8b87e0b12528433e7f466ad875870ecfeb5d920534e4ebd47f36"}
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.928650 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e63135fa27c8b87e0b12528433e7f466ad875870ecfeb5d920534e4ebd47f36"
Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.928414 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5"
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5" Feb 03 12:45:00 crc kubenswrapper[4679]: I0203 12:45:00.930589 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl" event={"ID":"573ad06e-d73b-430e-8041-322bc2d60e4a","Type":"ContainerStarted","Data":"7117b6af6c124fb1d04f0c080cbd5aac42e622f8927480b7c1ab0a059dd5c90b"} Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.021947 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87"] Feb 03 12:45:01 crc kubenswrapper[4679]: E0203 12:45:01.022318 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67eba320-30c8-4f6e-9958-f58ee00e9bdc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.022337 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="67eba320-30c8-4f6e-9958-f58ee00e9bdc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.022542 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="67eba320-30c8-4f6e-9958-f58ee00e9bdc" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.023124 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.026123 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.026475 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.026739 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.026991 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.027182 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.027334 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.027514 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.037524 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87"] Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055311 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055396 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b00ee047-2435-41ba-b376-be13d8309d1f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055470 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055540 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055562 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055785 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9dn\" (UniqueName: \"kubernetes.io/projected/b00ee047-2435-41ba-b376-be13d8309d1f-kube-api-access-ds9dn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055842 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055925 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.055991 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 
12:45:01.158116 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.158216 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.158302 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.158331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b00ee047-2435-41ba-b376-be13d8309d1f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.158384 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.158435 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.158465 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.159616 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds9dn\" (UniqueName: \"kubernetes.io/projected/b00ee047-2435-41ba-b376-be13d8309d1f-kube-api-access-ds9dn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.159653 4679 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.163258 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b00ee047-2435-41ba-b376-be13d8309d1f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.164860 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.164972 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.165201 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.165825 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.167095 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.168994 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.169650 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.178412 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds9dn\" (UniqueName: \"kubernetes.io/projected/b00ee047-2435-41ba-b376-be13d8309d1f-kube-api-access-ds9dn\") pod \"nova-edpm-deployment-openstack-edpm-ipam-82h87\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.344721 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.894248 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87"] Feb 03 12:45:01 crc kubenswrapper[4679]: W0203 12:45:01.897979 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb00ee047_2435_41ba_b376_be13d8309d1f.slice/crio-2a820bd319268a0437e3a4326e33fa1a1fab4f1e2853d926d0341d38a75ca7f8 WatchSource:0}: Error finding container 2a820bd319268a0437e3a4326e33fa1a1fab4f1e2853d926d0341d38a75ca7f8: Status 404 returned error can't find the container with id 2a820bd319268a0437e3a4326e33fa1a1fab4f1e2853d926d0341d38a75ca7f8 Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.939742 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" event={"ID":"b00ee047-2435-41ba-b376-be13d8309d1f","Type":"ContainerStarted","Data":"2a820bd319268a0437e3a4326e33fa1a1fab4f1e2853d926d0341d38a75ca7f8"} Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.941560 4679 generic.go:334] "Generic (PLEG): container finished" podID="573ad06e-d73b-430e-8041-322bc2d60e4a" containerID="5bca75c56ccc3bac488b2d08e51382254fc06a210de2a00f2835844b6fd591ae" exitCode=0 Feb 03 12:45:01 crc kubenswrapper[4679]: I0203 12:45:01.941628 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl" event={"ID":"573ad06e-d73b-430e-8041-322bc2d60e4a","Type":"ContainerDied","Data":"5bca75c56ccc3bac488b2d08e51382254fc06a210de2a00f2835844b6fd591ae"} Feb 03 12:45:02 crc kubenswrapper[4679]: I0203 12:45:02.950254 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" event={"ID":"b00ee047-2435-41ba-b376-be13d8309d1f","Type":"ContainerStarted","Data":"3deaa5f1812fbf97f25619311822f4b3d16778c1f81272756733cdcc928e17d9"} Feb 03 12:45:02 crc kubenswrapper[4679]: I0203 12:45:02.972698 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" podStartSLOduration=1.253501601 podStartE2EDuration="1.97266236s" podCreationTimestamp="2026-02-03 12:45:01 +0000 UTC" firstStartedPulling="2026-02-03 12:45:01.904382107 +0000 UTC m=+2374.379278195" lastFinishedPulling="2026-02-03 12:45:02.623542836 +0000 UTC m=+2375.098438954" observedRunningTime="2026-02-03 12:45:02.966379991 +0000 UTC m=+2375.441276099" watchObservedRunningTime="2026-02-03 12:45:02.97266236 +0000 UTC m=+2375.447558448" Feb 03 12:45:03 crc 
kubenswrapper[4679]: I0203 12:45:03.255704 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.303888 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/573ad06e-d73b-430e-8041-322bc2d60e4a-config-volume\") pod \"573ad06e-d73b-430e-8041-322bc2d60e4a\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.304071 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjgkk\" (UniqueName: \"kubernetes.io/projected/573ad06e-d73b-430e-8041-322bc2d60e4a-kube-api-access-cjgkk\") pod \"573ad06e-d73b-430e-8041-322bc2d60e4a\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.304198 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/573ad06e-d73b-430e-8041-322bc2d60e4a-secret-volume\") pod \"573ad06e-d73b-430e-8041-322bc2d60e4a\" (UID: \"573ad06e-d73b-430e-8041-322bc2d60e4a\") " Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.304857 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/573ad06e-d73b-430e-8041-322bc2d60e4a-config-volume" (OuterVolumeSpecName: "config-volume") pod "573ad06e-d73b-430e-8041-322bc2d60e4a" (UID: "573ad06e-d73b-430e-8041-322bc2d60e4a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.305930 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/573ad06e-d73b-430e-8041-322bc2d60e4a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.310242 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573ad06e-d73b-430e-8041-322bc2d60e4a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "573ad06e-d73b-430e-8041-322bc2d60e4a" (UID: "573ad06e-d73b-430e-8041-322bc2d60e4a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.310336 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573ad06e-d73b-430e-8041-322bc2d60e4a-kube-api-access-cjgkk" (OuterVolumeSpecName: "kube-api-access-cjgkk") pod "573ad06e-d73b-430e-8041-322bc2d60e4a" (UID: "573ad06e-d73b-430e-8041-322bc2d60e4a"). InnerVolumeSpecName "kube-api-access-cjgkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.407854 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjgkk\" (UniqueName: \"kubernetes.io/projected/573ad06e-d73b-430e-8041-322bc2d60e4a-kube-api-access-cjgkk\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.407921 4679 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/573ad06e-d73b-430e-8041-322bc2d60e4a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.958788 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl" Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.958852 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502045-c2ndl" event={"ID":"573ad06e-d73b-430e-8041-322bc2d60e4a","Type":"ContainerDied","Data":"7117b6af6c124fb1d04f0c080cbd5aac42e622f8927480b7c1ab0a059dd5c90b"} Feb 03 12:45:03 crc kubenswrapper[4679]: I0203 12:45:03.958947 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7117b6af6c124fb1d04f0c080cbd5aac42e622f8927480b7c1ab0a059dd5c90b" Feb 03 12:45:04 crc kubenswrapper[4679]: I0203 12:45:04.335388 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"] Feb 03 12:45:04 crc kubenswrapper[4679]: I0203 12:45:04.343582 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502000-z4fmq"] Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.223282 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871a99a3-a5e1-4e7a-926d-5168fec4b91e" path="/var/lib/kubelet/pods/871a99a3-a5e1-4e7a-926d-5168fec4b91e/volumes" Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.735604 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.735676 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.735726 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.737122 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.737286 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" gracePeriod=600 Feb 03 12:45:06 crc kubenswrapper[4679]: E0203 12:45:06.874673 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" 
podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.988788 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" exitCode=0 Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.988869 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"} Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.988985 4679 scope.go:117] "RemoveContainer" containerID="f8b640d616a097de390c17670aa347eacf35caf5e83ed996a2a3d78316e76fdb" Feb 03 12:45:06 crc kubenswrapper[4679]: I0203 12:45:06.990420 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:45:06 crc kubenswrapper[4679]: E0203 12:45:06.990946 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:45:19 crc kubenswrapper[4679]: I0203 12:45:19.213080 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:45:19 crc kubenswrapper[4679]: E0203 12:45:19.213916 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:45:30 crc kubenswrapper[4679]: I0203 12:45:30.211986 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:45:30 crc kubenswrapper[4679]: E0203 12:45:30.212736 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:45:35 crc kubenswrapper[4679]: I0203 12:45:35.360936 4679 scope.go:117] "RemoveContainer" containerID="72131e58eaab6c1360b9526fc2747bf4e5da6eb4c95ec8e104ab86b716e21853" Feb 03 12:45:41 crc kubenswrapper[4679]: I0203 12:45:41.211685 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:45:41 crc kubenswrapper[4679]: E0203 12:45:41.212735 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:45:56 crc kubenswrapper[4679]: I0203 12:45:56.212973 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:45:56 crc kubenswrapper[4679]: E0203 12:45:56.214343 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:46:12 crc kubenswrapper[4679]: I0203 12:46:12.211918 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:46:12 crc kubenswrapper[4679]: E0203 12:46:12.212847 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.937831 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pfmxb"] Feb 03 12:46:16 crc kubenswrapper[4679]: E0203 12:46:16.938970 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573ad06e-d73b-430e-8041-322bc2d60e4a" containerName="collect-profiles" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.938993 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="573ad06e-d73b-430e-8041-322bc2d60e4a" containerName="collect-profiles" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.939259 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="573ad06e-d73b-430e-8041-322bc2d60e4a" containerName="collect-profiles" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.941295 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.949243 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pfmxb"] Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.975908 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-utilities\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.976204 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-catalog-content\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:16 crc kubenswrapper[4679]: I0203 12:46:16.976334 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7h97\" (UniqueName: \"kubernetes.io/projected/5c20d969-706f-419f-b2a1-5c24ce67711f-kube-api-access-k7h97\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.078699 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7h97\" (UniqueName: \"kubernetes.io/projected/5c20d969-706f-419f-b2a1-5c24ce67711f-kube-api-access-k7h97\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.079216 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-utilities\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.079371 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-catalog-content\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.079720 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-utilities\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.079832 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-catalog-content\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.098057 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k7h97\" (UniqueName: \"kubernetes.io/projected/5c20d969-706f-419f-b2a1-5c24ce67711f-kube-api-access-k7h97\") pod \"certified-operators-pfmxb\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.269920 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:17 crc kubenswrapper[4679]: I0203 12:46:17.801942 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pfmxb"] Feb 03 12:46:18 crc kubenswrapper[4679]: I0203 12:46:18.689018 4679 generic.go:334] "Generic (PLEG): container finished" podID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerID="486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358" exitCode=0 Feb 03 12:46:18 crc kubenswrapper[4679]: I0203 12:46:18.689082 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerDied","Data":"486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358"} Feb 03 12:46:18 crc kubenswrapper[4679]: I0203 12:46:18.689341 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerStarted","Data":"1d966bf4c0905041ae600abf11163c1f738623e1ac86426e9dd5321dc927f2c2"} Feb 03 12:46:18 crc kubenswrapper[4679]: I0203 12:46:18.691504 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:46:19 crc kubenswrapper[4679]: I0203 12:46:19.699216 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerStarted","Data":"ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421"} Feb 03 12:46:20 crc kubenswrapper[4679]: I0203 12:46:20.709378 4679 generic.go:334] "Generic (PLEG): container finished" podID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerID="ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421" exitCode=0 Feb 03 12:46:20 crc kubenswrapper[4679]: I0203 12:46:20.709432 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerDied","Data":"ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421"} Feb 03 12:46:21 crc kubenswrapper[4679]: I0203 12:46:21.719837 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerStarted","Data":"b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6"} Feb 03 12:46:21 crc kubenswrapper[4679]: I0203 12:46:21.745935 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pfmxb" podStartSLOduration=3.298192847 podStartE2EDuration="5.745916822s" podCreationTimestamp="2026-02-03 12:46:16 +0000 UTC" firstStartedPulling="2026-02-03 12:46:18.691234617 +0000 UTC m=+2451.166130705" lastFinishedPulling="2026-02-03 12:46:21.138958592 +0000 UTC m=+2453.613854680" observedRunningTime="2026-02-03 12:46:21.741237113 +0000 UTC m=+2454.216133211" watchObservedRunningTime="2026-02-03 
12:46:21.745916822 +0000 UTC m=+2454.220812910" Feb 03 12:46:25 crc kubenswrapper[4679]: I0203 12:46:25.211835 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:46:25 crc kubenswrapper[4679]: E0203 12:46:25.212489 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:46:27 crc kubenswrapper[4679]: I0203 12:46:27.270120 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:27 crc kubenswrapper[4679]: I0203 12:46:27.271103 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:27 crc kubenswrapper[4679]: I0203 12:46:27.321325 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:27 crc kubenswrapper[4679]: I0203 12:46:27.834652 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:28 crc kubenswrapper[4679]: I0203 12:46:28.722365 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pfmxb"] Feb 03 12:46:29 crc kubenswrapper[4679]: I0203 12:46:29.801682 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pfmxb" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="registry-server" containerID="cri-o://b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6" gracePeriod=2 Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.236563 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.358070 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-catalog-content\") pod \"5c20d969-706f-419f-b2a1-5c24ce67711f\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.358199 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7h97\" (UniqueName: \"kubernetes.io/projected/5c20d969-706f-419f-b2a1-5c24ce67711f-kube-api-access-k7h97\") pod \"5c20d969-706f-419f-b2a1-5c24ce67711f\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.358504 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-utilities\") pod \"5c20d969-706f-419f-b2a1-5c24ce67711f\" (UID: \"5c20d969-706f-419f-b2a1-5c24ce67711f\") " Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.359386 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-utilities" (OuterVolumeSpecName: "utilities") pod "5c20d969-706f-419f-b2a1-5c24ce67711f" (UID: "5c20d969-706f-419f-b2a1-5c24ce67711f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.364749 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c20d969-706f-419f-b2a1-5c24ce67711f-kube-api-access-k7h97" (OuterVolumeSpecName: "kube-api-access-k7h97") pod "5c20d969-706f-419f-b2a1-5c24ce67711f" (UID: "5c20d969-706f-419f-b2a1-5c24ce67711f"). InnerVolumeSpecName "kube-api-access-k7h97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.460424 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7h97\" (UniqueName: \"kubernetes.io/projected/5c20d969-706f-419f-b2a1-5c24ce67711f-kube-api-access-k7h97\") on node \"crc\" DevicePath \"\"" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.460463 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.811821 4679 generic.go:334] "Generic (PLEG): container finished" podID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerID="b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6" exitCode=0 Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.811884 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerDied","Data":"b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6"} Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.811912 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfmxb" event={"ID":"5c20d969-706f-419f-b2a1-5c24ce67711f","Type":"ContainerDied","Data":"1d966bf4c0905041ae600abf11163c1f738623e1ac86426e9dd5321dc927f2c2"} Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.811950 4679 scope.go:117] "RemoveContainer" containerID="b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.812098 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pfmxb" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.833750 4679 scope.go:117] "RemoveContainer" containerID="ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.842944 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c20d969-706f-419f-b2a1-5c24ce67711f" (UID: "5c20d969-706f-419f-b2a1-5c24ce67711f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.864280 4679 scope.go:117] "RemoveContainer" containerID="486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.872881 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c20d969-706f-419f-b2a1-5c24ce67711f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.913890 4679 scope.go:117] "RemoveContainer" containerID="b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6" Feb 03 12:46:30 crc kubenswrapper[4679]: E0203 12:46:30.914961 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6\": container with ID starting with b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6 not found: ID does not exist" containerID="b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.915013 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6"} err="failed to get container status \"b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6\": rpc error: code = NotFound desc = could not find container \"b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6\": container with ID starting with b0ec8060f89f64114b08b802bd55c6920bd5df76f55ad4f7bef6bdf12ed2b2f6 not found: ID does not exist" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.915048 4679 scope.go:117] "RemoveContainer" containerID="ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421" Feb 03 12:46:30 crc kubenswrapper[4679]: E0203 12:46:30.915574 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421\": container with ID starting with ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421 not found: ID does not exist" containerID="ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.915637 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421"} err="failed to get container status \"ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421\": rpc error: code = NotFound desc = could not find container \"ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421\": container with ID starting with ecb49114e80b6ffa106be25558c2118a059c25d5b46c48cf7ef3e39a5d8a1421 not found: ID does not exist" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.915678 4679 scope.go:117] "RemoveContainer" containerID="486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358" Feb 03 12:46:30 crc kubenswrapper[4679]: E0203 12:46:30.916111 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358\": container with ID starting with 486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358 not found: ID does not exist" 
containerID="486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358" Feb 03 12:46:30 crc kubenswrapper[4679]: I0203 12:46:30.916209 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358"} err="failed to get container status \"486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358\": rpc error: code = NotFound desc = could not find container \"486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358\": container with ID starting with 486df9567422c940cba81a224134150b406e991bd5f604064fd212fea6acf358 not found: ID does not exist" Feb 03 12:46:31 crc kubenswrapper[4679]: I0203 12:46:31.148131 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pfmxb"] Feb 03 12:46:31 crc kubenswrapper[4679]: I0203 12:46:31.155661 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pfmxb"] Feb 03 12:46:32 crc kubenswrapper[4679]: I0203 12:46:32.222163 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" path="/var/lib/kubelet/pods/5c20d969-706f-419f-b2a1-5c24ce67711f/volumes" Feb 03 12:46:36 crc kubenswrapper[4679]: I0203 12:46:36.212008 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:46:36 crc kubenswrapper[4679]: E0203 12:46:36.212881 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:46:51 crc kubenswrapper[4679]: I0203 12:46:51.212188 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:46:51 crc kubenswrapper[4679]: E0203 12:46:51.213180 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:47:02 crc kubenswrapper[4679]: I0203 12:47:02.212938 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:47:02 crc kubenswrapper[4679]: E0203 12:47:02.214316 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 12:47:06 crc kubenswrapper[4679]: I0203 12:47:06.114500 4679 generic.go:334] "Generic (PLEG): container finished" podID="b00ee047-2435-41ba-b376-be13d8309d1f" containerID="3deaa5f1812fbf97f25619311822f4b3d16778c1f81272756733cdcc928e17d9" exitCode=0 Feb 03 
12:47:06 crc kubenswrapper[4679]: I0203 12:47:06.114609 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" event={"ID":"b00ee047-2435-41ba-b376-be13d8309d1f","Type":"ContainerDied","Data":"3deaa5f1812fbf97f25619311822f4b3d16778c1f81272756733cdcc928e17d9"} Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.537019 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598612 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b00ee047-2435-41ba-b376-be13d8309d1f-nova-extra-config-0\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598686 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-inventory\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598712 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-0\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598764 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-1\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598822 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-1\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598861 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-0\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598910 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds9dn\" (UniqueName: \"kubernetes.io/projected/b00ee047-2435-41ba-b376-be13d8309d1f-kube-api-access-ds9dn\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.598951 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-combined-ca-bundle\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.599005 4679 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-ssh-key-openstack-edpm-ipam\") pod \"b00ee047-2435-41ba-b376-be13d8309d1f\" (UID: \"b00ee047-2435-41ba-b376-be13d8309d1f\") " Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.605886 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.610655 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00ee047-2435-41ba-b376-be13d8309d1f-kube-api-access-ds9dn" (OuterVolumeSpecName: "kube-api-access-ds9dn") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "kube-api-access-ds9dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.629582 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.631274 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.634324 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00ee047-2435-41ba-b376-be13d8309d1f-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.637747 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-inventory" (OuterVolumeSpecName: "inventory") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.641668 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.641877 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.644766 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b00ee047-2435-41ba-b376-be13d8309d1f" (UID: "b00ee047-2435-41ba-b376-be13d8309d1f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700596 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700628 4679 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b00ee047-2435-41ba-b376-be13d8309d1f-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700638 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700648 4679 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700655 4679 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700664 4679 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700673 4679 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700682 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds9dn\" (UniqueName: \"kubernetes.io/projected/b00ee047-2435-41ba-b376-be13d8309d1f-kube-api-access-ds9dn\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:07 crc kubenswrapper[4679]: I0203 12:47:07.700690 4679 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00ee047-2435-41ba-b376-be13d8309d1f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 12:47:08 crc 
kubenswrapper[4679]: I0203 12:47:08.132546 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" event={"ID":"b00ee047-2435-41ba-b376-be13d8309d1f","Type":"ContainerDied","Data":"2a820bd319268a0437e3a4326e33fa1a1fab4f1e2853d926d0341d38a75ca7f8"} Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.132876 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a820bd319268a0437e3a4326e33fa1a1fab4f1e2853d926d0341d38a75ca7f8" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.132841 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-82h87" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.239967 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"] Feb 03 12:47:08 crc kubenswrapper[4679]: E0203 12:47:08.240461 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="extract-utilities" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.240482 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="extract-utilities" Feb 03 12:47:08 crc kubenswrapper[4679]: E0203 12:47:08.240516 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="extract-content" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.240524 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="extract-content" Feb 03 12:47:08 crc kubenswrapper[4679]: E0203 12:47:08.240543 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00ee047-2435-41ba-b376-be13d8309d1f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.240551 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00ee047-2435-41ba-b376-be13d8309d1f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:08 crc kubenswrapper[4679]: E0203 12:47:08.240576 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="registry-server" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.240586 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="registry-server" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.240800 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00ee047-2435-41ba-b376-be13d8309d1f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.240827 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c20d969-706f-419f-b2a1-5c24ce67711f" containerName="registry-server" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.241559 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.246230 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.246576 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.246941 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.247143 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ss7lg" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.249489 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.256737 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"] Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.313688 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.313751 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftgbj\" (UniqueName: \"kubernetes.io/projected/fbcf4978-33e8-4444-b972-dd9859e52ec0-kube-api-access-ftgbj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.313799 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.313875 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.313950 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" Feb 03 12:47:08 crc 
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.313986 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.314054 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416162 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416205 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftgbj\" (UniqueName: \"kubernetes.io/projected/fbcf4978-33e8-4444-b972-dd9859e52ec0-kube-api-access-ftgbj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416226 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416275 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416323 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416347 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.416399 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.421072 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.421179 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.421551 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.422664 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.424020 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.424758 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.435462 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftgbj\" (UniqueName: \"kubernetes.io/projected/fbcf4978-33e8-4444-b972-dd9859e52ec0-kube-api-access-ftgbj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:08 crc kubenswrapper[4679]: I0203 12:47:08.559919 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:47:09 crc kubenswrapper[4679]: I0203 12:47:09.138283 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"]
Feb 03 12:47:09 crc kubenswrapper[4679]: I0203 12:47:09.151176 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" event={"ID":"fbcf4978-33e8-4444-b972-dd9859e52ec0","Type":"ContainerStarted","Data":"014abceca4a2f21762210d793bbdafe55993b87b666aac4d3fa85e64091f7138"}
Feb 03 12:47:10 crc kubenswrapper[4679]: I0203 12:47:10.160145 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" event={"ID":"fbcf4978-33e8-4444-b972-dd9859e52ec0","Type":"ContainerStarted","Data":"170c06dee13ff65e82442f9ad84eca53c669b19954130174c8ee10b14435411a"}
Feb 03 12:47:10 crc kubenswrapper[4679]: I0203 12:47:10.182255 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" podStartSLOduration=1.521608696 podStartE2EDuration="2.18223543s" podCreationTimestamp="2026-02-03 12:47:08 +0000 UTC" firstStartedPulling="2026-02-03 12:47:09.141117946 +0000 UTC m=+2501.616014034" lastFinishedPulling="2026-02-03 12:47:09.80174468 +0000 UTC m=+2502.276640768" observedRunningTime="2026-02-03 12:47:10.176467392 +0000 UTC m=+2502.651363500" watchObservedRunningTime="2026-02-03 12:47:10.18223543 +0000 UTC m=+2502.657131518"
Feb 03 12:47:17 crc kubenswrapper[4679]: I0203 12:47:17.212297 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:47:17 crc kubenswrapper[4679]: E0203 12:47:17.213163 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:47:30 crc kubenswrapper[4679]: I0203 12:47:30.211717 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:47:30 crc kubenswrapper[4679]: E0203 12:47:30.212590 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:47:43 crc kubenswrapper[4679]: I0203 12:47:43.212160 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:47:43 crc kubenswrapper[4679]: E0203 12:47:43.212947 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:47:56 crc kubenswrapper[4679]: I0203 12:47:56.213597 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:47:56 crc kubenswrapper[4679]: E0203 12:47:56.214454 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:48:07 crc kubenswrapper[4679]: I0203 12:48:07.211601 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:48:07 crc kubenswrapper[4679]: E0203 12:48:07.212430 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:48:20 crc kubenswrapper[4679]: I0203 12:48:20.211682 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:48:20 crc kubenswrapper[4679]: E0203 12:48:20.212439 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:48:33 crc kubenswrapper[4679]: I0203 12:48:33.213264 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:48:33 crc kubenswrapper[4679]: E0203 12:48:33.214098 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:48:44 crc kubenswrapper[4679]: I0203 12:48:44.212654 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:48:44 crc kubenswrapper[4679]: E0203 12:48:44.213673 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:48:55 crc kubenswrapper[4679]: I0203 12:48:55.211937 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:48:55 crc kubenswrapper[4679]: E0203 12:48:55.213723 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:49:08 crc kubenswrapper[4679]: I0203 12:49:08.222775 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:49:08 crc kubenswrapper[4679]: E0203 12:49:08.224255 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:49:20 crc kubenswrapper[4679]: I0203 12:49:20.212017 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:49:20 crc kubenswrapper[4679]: E0203 12:49:20.213019 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:49:23 crc kubenswrapper[4679]: I0203 12:49:23.299860 4679 generic.go:334] "Generic (PLEG): container finished" podID="fbcf4978-33e8-4444-b972-dd9859e52ec0" containerID="170c06dee13ff65e82442f9ad84eca53c669b19954130174c8ee10b14435411a" exitCode=0
Feb 03 12:49:23 crc kubenswrapper[4679]: I0203 12:49:23.299930 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" event={"ID":"fbcf4978-33e8-4444-b972-dd9859e52ec0","Type":"ContainerDied","Data":"170c06dee13ff65e82442f9ad84eca53c669b19954130174c8ee10b14435411a"}
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.698742 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755655 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ssh-key-openstack-edpm-ipam\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755692 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-1\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755737 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-telemetry-combined-ca-bundle\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755789 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftgbj\" (UniqueName: \"kubernetes.io/projected/fbcf4978-33e8-4444-b972-dd9859e52ec0-kube-api-access-ftgbj\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755811 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-0\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755867 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-2\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.755946 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-inventory\") pod \"fbcf4978-33e8-4444-b972-dd9859e52ec0\" (UID: \"fbcf4978-33e8-4444-b972-dd9859e52ec0\") "
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.762190 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcf4978-33e8-4444-b972-dd9859e52ec0-kube-api-access-ftgbj" (OuterVolumeSpecName: "kube-api-access-ftgbj") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "kube-api-access-ftgbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.762292 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.784876 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.785482 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.786869 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.788647 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-inventory" (OuterVolumeSpecName: "inventory") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.797938 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fbcf4978-33e8-4444-b972-dd9859e52ec0" (UID: "fbcf4978-33e8-4444-b972-dd9859e52ec0"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858639 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858681 4679 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858696 4679 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858708 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftgbj\" (UniqueName: \"kubernetes.io/projected/fbcf4978-33e8-4444-b972-dd9859e52ec0-kube-api-access-ftgbj\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858721 4679 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858731 4679 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:24 crc kubenswrapper[4679]: I0203 12:49:24.858741 4679 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbcf4978-33e8-4444-b972-dd9859e52ec0-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 12:49:25 crc kubenswrapper[4679]: I0203 12:49:25.325907 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq" event={"ID":"fbcf4978-33e8-4444-b972-dd9859e52ec0","Type":"ContainerDied","Data":"014abceca4a2f21762210d793bbdafe55993b87b666aac4d3fa85e64091f7138"}
Feb 03 12:49:25 crc kubenswrapper[4679]: I0203 12:49:25.325958 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014abceca4a2f21762210d793bbdafe55993b87b666aac4d3fa85e64091f7138"
Feb 03 12:49:25 crc kubenswrapper[4679]: I0203 12:49:25.326095 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq"
Feb 03 12:49:33 crc kubenswrapper[4679]: I0203 12:49:33.213129 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:49:33 crc kubenswrapper[4679]: E0203 12:49:33.214270 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:49:48 crc kubenswrapper[4679]: I0203 12:49:48.218599 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:49:48 crc kubenswrapper[4679]: E0203 12:49:48.220005 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:49:59 crc kubenswrapper[4679]: I0203 12:49:59.211698 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:49:59 crc kubenswrapper[4679]: E0203 12:49:59.212535 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:50:10 crc kubenswrapper[4679]: I0203 12:50:10.212802 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620"
Feb 03 12:50:10 crc kubenswrapper[4679]: I0203 12:50:10.694619 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"5c22cbffe7c5e4756198776ddbc4a86dc06d61b5aeb81b3a6f04c18c85b1e1ae"}
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.498753 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 03 12:50:23 crc kubenswrapper[4679]: E0203 12:50:23.499773 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcf4978-33e8-4444-b972-dd9859e52ec0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.499826 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcf4978-33e8-4444-b972-dd9859e52ec0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.500082 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcf4978-33e8-4444-b972-dd9859e52ec0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.500880 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.503327 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.503532 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.503688 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.504133 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cj5p9"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.507690 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.539186 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.539243 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.539273 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-config-data\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.539662 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.540084 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfczf\" (UniqueName: \"kubernetes.io/projected/a210a822-5111-45b8-9068-e745a7471962-kube-api-access-mfczf\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.540271 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.540402 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.540472 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.540492 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642466 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfczf\" (UniqueName: \"kubernetes.io/projected/a210a822-5111-45b8-9068-e745a7471962-kube-api-access-mfczf\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642531 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642563 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642589 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642609 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642676 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642728 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.642762 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-config-data\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.643621 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.644078 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.644332 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-config-data\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.644921 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.644982 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.656451 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.668045 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.669894 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.672090 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfczf\" (UniqueName: \"kubernetes.io/projected/a210a822-5111-45b8-9068-e745a7471962-kube-api-access-mfczf\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.719432 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " pod="openstack/tempest-tests-tempest"
Feb 03 12:50:23 crc kubenswrapper[4679]: I0203 12:50:23.827938 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 03 12:50:24 crc kubenswrapper[4679]: I0203 12:50:24.266338 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 03 12:50:24 crc kubenswrapper[4679]: I0203 12:50:24.824142 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a210a822-5111-45b8-9068-e745a7471962","Type":"ContainerStarted","Data":"ff9d435bfd1580f2e2af42573c51040b9dbd9378792398b934cea1017b30610b"}
Feb 03 12:51:01 crc kubenswrapper[4679]: E0203 12:51:01.070472 4679 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Feb 03 12:51:01 crc kubenswrapper[4679]: E0203 12:51:01.071290 4679 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfczf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a210a822-5111-45b8-9068-e745a7471962): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 12:51:01 crc kubenswrapper[4679]: E0203 12:51:01.072687 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a210a822-5111-45b8-9068-e745a7471962"
Feb 03 12:51:01 crc kubenswrapper[4679]: E0203 12:51:01.158068 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a210a822-5111-45b8-9068-e745a7471962"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.530763 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fsdl"]
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.534316 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.542455 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fsdl"]
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.708704 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-catalog-content\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.709101 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-utilities\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.709286 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5gm\" (UniqueName: \"kubernetes.io/projected/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-kube-api-access-5l5gm\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.811331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5gm\" (UniqueName: \"kubernetes.io/projected/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-kube-api-access-5l5gm\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.811411 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-catalog-content\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.811513 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-utilities\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.811976 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-utilities\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.812251 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-catalog-content\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.835157 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5gm\" (UniqueName: \"kubernetes.io/projected/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-kube-api-access-5l5gm\") pod \"redhat-operators-8fsdl\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") " pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:15 crc kubenswrapper[4679]: I0203 12:51:15.858906 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:16 crc kubenswrapper[4679]: I0203 12:51:16.323784 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fsdl"]
Feb 03 12:51:16 crc kubenswrapper[4679]: W0203 12:51:16.324116 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a34ff4_cffa_4c59_ae85_6f0f3c79e6ed.slice/crio-a8659799316bde845aaf225147654d2159f26a2090c6867583563c18130ec732 WatchSource:0}: Error finding container a8659799316bde845aaf225147654d2159f26a2090c6867583563c18130ec732: Status 404 returned error can't find the container with id a8659799316bde845aaf225147654d2159f26a2090c6867583563c18130ec732
Feb 03 12:51:16 crc kubenswrapper[4679]: I0203 12:51:16.638880 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 03 12:51:17 crc kubenswrapper[4679]: I0203 12:51:17.307495 4679 generic.go:334] "Generic (PLEG): container finished" podID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerID="4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1" exitCode=0
Feb 03 12:51:17 crc kubenswrapper[4679]: I0203 12:51:17.307548 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerDied","Data":"4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1"}
Feb 03 12:51:17 crc kubenswrapper[4679]: I0203 12:51:17.307574 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerStarted","Data":"a8659799316bde845aaf225147654d2159f26a2090c6867583563c18130ec732"}
Feb 03 12:51:18 crc kubenswrapper[4679]: I0203 12:51:18.316947 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerStarted","Data":"5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09"}
Feb 03 12:51:18 crc kubenswrapper[4679]: I0203 12:51:18.319316 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a210a822-5111-45b8-9068-e745a7471962","Type":"ContainerStarted","Data":"000f0eac60af9a82525046facfa27c6e7795d92b33a1ab61127477a6a7952836"}
Feb 03 12:51:25 crc kubenswrapper[4679]: I0203 12:51:25.378349 4679 generic.go:334] "Generic (PLEG): container finished" podID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerID="5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09" exitCode=0
Feb 03 12:51:25 crc kubenswrapper[4679]: I0203 12:51:25.378472 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerDied","Data":"5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09"}
Feb 03 12:51:25 crc kubenswrapper[4679]: I0203 12:51:25.381427 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 12:51:25 crc kubenswrapper[4679]: I0203 12:51:25.402739 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=11.041081479 podStartE2EDuration="1m3.402719253s" podCreationTimestamp="2026-02-03 12:50:22 +0000 UTC" firstStartedPulling="2026-02-03 12:50:24.274784333 +0000 UTC m=+2696.749680421" lastFinishedPulling="2026-02-03 12:51:16.636422107 +0000 UTC m=+2749.111318195" observedRunningTime="2026-02-03 12:51:18.362125175 +0000 UTC m=+2750.837021263" watchObservedRunningTime="2026-02-03 12:51:25.402719253 +0000 UTC m=+2757.877615351"
Feb 03 12:51:29 crc kubenswrapper[4679]: I0203 12:51:29.425174 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerStarted","Data":"01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f"}
Feb 03 12:51:29 crc kubenswrapper[4679]: I0203 12:51:29.458329 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fsdl" podStartSLOduration=3.182497575 podStartE2EDuration="14.45830352s" podCreationTimestamp="2026-02-03 12:51:15 +0000 UTC" firstStartedPulling="2026-02-03 12:51:17.311129852 +0000 UTC m=+2749.786025940" lastFinishedPulling="2026-02-03 12:51:28.586935807 +0000 UTC m=+2761.061831885" observedRunningTime="2026-02-03 12:51:29.450048888 +0000 UTC m=+2761.924944986" watchObservedRunningTime="2026-02-03 12:51:29.45830352 +0000 UTC m=+2761.933199618"
Feb 03 12:51:35 crc kubenswrapper[4679]: I0203 12:51:35.859842 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:35 crc kubenswrapper[4679]: I0203 12:51:35.860371 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:51:36 crc kubenswrapper[4679]: I0203 12:51:36.909930 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fsdl" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:51:36 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:51:36 crc kubenswrapper[4679]: >
Feb 03 12:51:46 crc kubenswrapper[4679]: I0203 12:51:46.903828 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fsdl" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:51:46 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:51:46 crc kubenswrapper[4679]: >
Feb 03 12:51:56 crc kubenswrapper[4679]: I0203 12:51:56.907600 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fsdl" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:51:56 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:51:56 crc kubenswrapper[4679]: >
Feb 03 12:52:06 crc kubenswrapper[4679]: I0203 12:52:06.913608 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fsdl" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:52:06 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:52:06 crc kubenswrapper[4679]: >
Feb 03 12:52:16 crc kubenswrapper[4679]: I0203 12:52:16.910299 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fsdl" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" probeResult="failure" output=<
Feb 03 12:52:16 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s
Feb 03 12:52:16 crc kubenswrapper[4679]: >
Feb 03 12:52:25 crc kubenswrapper[4679]: I0203 12:52:25.907252 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:52:25 crc kubenswrapper[4679]: I0203 12:52:25.966588 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:52:26 crc kubenswrapper[4679]: I0203 12:52:26.143499 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fsdl"]
Feb 03 12:52:27 crc kubenswrapper[4679]: I0203 12:52:27.934253 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fsdl" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" containerID="cri-o://01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f" gracePeriod=2
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.384558 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fsdl"
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.491926 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-catalog-content\") pod \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") "
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.492216 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5gm\" (UniqueName: \"kubernetes.io/projected/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-kube-api-access-5l5gm\") pod \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") "
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.492289 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-utilities\") pod \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\" (UID: \"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed\") "
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.492911 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-utilities" (OuterVolumeSpecName: "utilities") pod "a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" (UID: "a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.500546 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-kube-api-access-5l5gm" (OuterVolumeSpecName: "kube-api-access-5l5gm") pod "a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" (UID: "a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed"). InnerVolumeSpecName "kube-api-access-5l5gm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.593763 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.593803 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5gm\" (UniqueName: \"kubernetes.io/projected/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-kube-api-access-5l5gm\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.695536 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.946078 4679 generic.go:334] "Generic (PLEG): container finished" podID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerID="01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f" exitCode=0 Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.946123 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerDied","Data":"01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f"} Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.946154 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fsdl" event={"ID":"a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed","Type":"ContainerDied","Data":"a8659799316bde845aaf225147654d2159f26a2090c6867583563c18130ec732"} Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.946154 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fsdl" Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.946187 4679 scope.go:117] "RemoveContainer" containerID="01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f" Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.967829 4679 scope.go:117] "RemoveContainer" containerID="5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09" Feb 03 12:52:28 crc kubenswrapper[4679]: I0203 12:52:28.993518 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fsdl"] Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.004651 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fsdl"] Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.006626 4679 scope.go:117] "RemoveContainer" containerID="4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1" Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.044751 4679 scope.go:117] "RemoveContainer" containerID="01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f" Feb 03 12:52:29 crc kubenswrapper[4679]: E0203 12:52:29.045471 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f\": container with ID starting with 01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f not found: ID does not exist" containerID="01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f" Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.045536 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f"} err="failed to get container status \"01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f\": rpc error: code = NotFound desc = could not find container \"01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f\": container with ID starting with 01700080e88696c75b3f78582f4bcc322fe20d94fbc17679e11f1142ea64e15f not found: ID does not exist" Feb 03 12:52:29 crc 
kubenswrapper[4679]: I0203 12:52:29.045569 4679 scope.go:117] "RemoveContainer" containerID="5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09" Feb 03 12:52:29 crc kubenswrapper[4679]: E0203 12:52:29.046048 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09\": container with ID starting with 5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09 not found: ID does not exist" containerID="5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09" Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.046119 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09"} err="failed to get container status \"5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09\": rpc error: code = NotFound desc = could not find container \"5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09\": container with ID starting with 5ae6b99416e2e08b5a93f5dd286e8eea078c61d904c782208a2944dc9f725c09 not found: ID does not exist" Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.046174 4679 scope.go:117] "RemoveContainer" containerID="4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1" Feb 03 12:52:29 crc kubenswrapper[4679]: E0203 12:52:29.046670 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1\": container with ID starting with 4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1 not found: ID does not exist" containerID="4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1" Feb 03 12:52:29 crc kubenswrapper[4679]: I0203 12:52:29.046703 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1"} err="failed to get container status \"4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1\": rpc error: code = NotFound desc = could not find container \"4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1\": container with ID starting with 4b6cd316e8ae8f0bd57d93d1958fddcbe68a0581883f4da33cdc2f58089615c1 not found: ID does not exist" Feb 03 12:52:30 crc kubenswrapper[4679]: I0203 12:52:30.223849 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" path="/var/lib/kubelet/pods/a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed/volumes" Feb 03 12:52:36 crc kubenswrapper[4679]: I0203 12:52:36.736132 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:52:36 crc kubenswrapper[4679]: I0203 12:52:36.736781 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:53:06 crc kubenswrapper[4679]: I0203 12:53:06.735402 4679 patch_prober.go:28] interesting 
pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:53:06 crc kubenswrapper[4679]: I0203 12:53:06.735857 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:53:36 crc kubenswrapper[4679]: I0203 12:53:36.735549 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:53:36 crc kubenswrapper[4679]: I0203 12:53:36.736196 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:53:36 crc kubenswrapper[4679]: I0203 12:53:36.736247 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 12:53:36 crc kubenswrapper[4679]: I0203 12:53:36.737085 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c22cbffe7c5e4756198776ddbc4a86dc06d61b5aeb81b3a6f04c18c85b1e1ae"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 12:53:36 crc kubenswrapper[4679]: I0203 12:53:36.737155 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://5c22cbffe7c5e4756198776ddbc4a86dc06d61b5aeb81b3a6f04c18c85b1e1ae" gracePeriod=600 Feb 03 12:53:37 crc kubenswrapper[4679]: I0203 12:53:37.612492 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="5c22cbffe7c5e4756198776ddbc4a86dc06d61b5aeb81b3a6f04c18c85b1e1ae" exitCode=0 Feb 03 12:53:37 crc kubenswrapper[4679]: I0203 12:53:37.612583 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"5c22cbffe7c5e4756198776ddbc4a86dc06d61b5aeb81b3a6f04c18c85b1e1ae"} Feb 03 12:53:37 crc kubenswrapper[4679]: I0203 12:53:37.613054 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"} Feb 03 12:53:37 crc kubenswrapper[4679]: I0203 12:53:37.613078 4679 scope.go:117] "RemoveContainer" containerID="e2b93f9dd5bd74cc3f1c1f2cdee08e3603974a2b97d76601f778ccc9d20a5620" Feb 03 12:54:47 
crc kubenswrapper[4679]: I0203 12:54:47.454769 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nfw7z"] Feb 03 12:54:47 crc kubenswrapper[4679]: E0203 12:54:47.456912 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="extract-utilities" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.457110 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="extract-utilities" Feb 03 12:54:47 crc kubenswrapper[4679]: E0203 12:54:47.457215 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.457296 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" Feb 03 12:54:47 crc kubenswrapper[4679]: E0203 12:54:47.457423 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="extract-content" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.457513 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="extract-content" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.457876 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a34ff4-cffa-4c59-ae85-6f0f3c79e6ed" containerName="registry-server" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.459596 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.468217 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfw7z"] Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.584411 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-catalog-content\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.584473 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96845\" (UniqueName: \"kubernetes.io/projected/15a7e0ac-d139-410b-a4dd-994e1bfd8450-kube-api-access-96845\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.584586 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-utilities\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.686364 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-catalog-content\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " 
pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.686405 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96845\" (UniqueName: \"kubernetes.io/projected/15a7e0ac-d139-410b-a4dd-994e1bfd8450-kube-api-access-96845\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.686458 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-utilities\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.687221 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-utilities\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.687297 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-catalog-content\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.715606 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96845\" (UniqueName: \"kubernetes.io/projected/15a7e0ac-d139-410b-a4dd-994e1bfd8450-kube-api-access-96845\") pod \"community-operators-nfw7z\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:47 crc kubenswrapper[4679]: I0203 12:54:47.786005 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:48 crc kubenswrapper[4679]: I0203 12:54:48.415668 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfw7z"] Feb 03 12:54:49 crc kubenswrapper[4679]: I0203 12:54:49.229588 4679 generic.go:334] "Generic (PLEG): container finished" podID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerID="3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da" exitCode=0 Feb 03 12:54:49 crc kubenswrapper[4679]: I0203 12:54:49.230132 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfw7z" event={"ID":"15a7e0ac-d139-410b-a4dd-994e1bfd8450","Type":"ContainerDied","Data":"3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da"} Feb 03 12:54:49 crc kubenswrapper[4679]: I0203 12:54:49.232505 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfw7z" event={"ID":"15a7e0ac-d139-410b-a4dd-994e1bfd8450","Type":"ContainerStarted","Data":"ec75aa8d5f6612d1199ce01b02c2d1e7ac7616fe4ddec0749e7252b721b98378"} Feb 03 12:54:50 crc kubenswrapper[4679]: I0203 12:54:50.243647 4679 generic.go:334] "Generic (PLEG): container finished" podID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerID="56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04" exitCode=0 Feb 03 12:54:50 crc kubenswrapper[4679]: I0203 12:54:50.244169 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfw7z" event={"ID":"15a7e0ac-d139-410b-a4dd-994e1bfd8450","Type":"ContainerDied","Data":"56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04"} Feb 03 12:54:52 crc kubenswrapper[4679]: I0203 12:54:52.262698 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfw7z" event={"ID":"15a7e0ac-d139-410b-a4dd-994e1bfd8450","Type":"ContainerStarted","Data":"541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446"} Feb 03 12:54:57 crc kubenswrapper[4679]: I0203 12:54:57.786297 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:57 crc kubenswrapper[4679]: I0203 12:54:57.786888 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:57 crc kubenswrapper[4679]: I0203 12:54:57.845656 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:57 crc kubenswrapper[4679]: I0203 12:54:57.875589 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nfw7z" podStartSLOduration=9.383147399 podStartE2EDuration="10.875569968s" podCreationTimestamp="2026-02-03 12:54:47 +0000 UTC" firstStartedPulling="2026-02-03 12:54:49.233059165 +0000 UTC m=+2961.707955293" lastFinishedPulling="2026-02-03 12:54:50.725481774 +0000 UTC m=+2963.200377862" observedRunningTime="2026-02-03 12:54:52.293011323 +0000 UTC m=+2964.767907421" watchObservedRunningTime="2026-02-03 12:54:57.875569968 +0000 UTC m=+2970.350466056" Feb 03 12:54:58 crc kubenswrapper[4679]: I0203 12:54:58.360141 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:54:58 crc kubenswrapper[4679]: I0203 12:54:58.406813 4679 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-nfw7z"] Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.332067 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nfw7z" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="registry-server" containerID="cri-o://541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446" gracePeriod=2 Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.811089 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.958232 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-utilities\") pod \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.958321 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96845\" (UniqueName: \"kubernetes.io/projected/15a7e0ac-d139-410b-a4dd-994e1bfd8450-kube-api-access-96845\") pod \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.958481 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-catalog-content\") pod \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\" (UID: \"15a7e0ac-d139-410b-a4dd-994e1bfd8450\") " Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.959423 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-utilities" (OuterVolumeSpecName: "utilities") pod "15a7e0ac-d139-410b-a4dd-994e1bfd8450" (UID: "15a7e0ac-d139-410b-a4dd-994e1bfd8450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:55:00 crc kubenswrapper[4679]: I0203 12:55:00.969603 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a7e0ac-d139-410b-a4dd-994e1bfd8450-kube-api-access-96845" (OuterVolumeSpecName: "kube-api-access-96845") pod "15a7e0ac-d139-410b-a4dd-994e1bfd8450" (UID: "15a7e0ac-d139-410b-a4dd-994e1bfd8450"). InnerVolumeSpecName "kube-api-access-96845". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.020619 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15a7e0ac-d139-410b-a4dd-994e1bfd8450" (UID: "15a7e0ac-d139-410b-a4dd-994e1bfd8450"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.061736 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.061791 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96845\" (UniqueName: \"kubernetes.io/projected/15a7e0ac-d139-410b-a4dd-994e1bfd8450-kube-api-access-96845\") on node \"crc\" DevicePath \"\"" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.061802 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a7e0ac-d139-410b-a4dd-994e1bfd8450-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.344469 4679 generic.go:334] "Generic (PLEG): container finished" podID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerID="541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446" exitCode=0 Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.344549 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfw7z" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.344570 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfw7z" event={"ID":"15a7e0ac-d139-410b-a4dd-994e1bfd8450","Type":"ContainerDied","Data":"541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446"} Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.345227 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfw7z" event={"ID":"15a7e0ac-d139-410b-a4dd-994e1bfd8450","Type":"ContainerDied","Data":"ec75aa8d5f6612d1199ce01b02c2d1e7ac7616fe4ddec0749e7252b721b98378"} Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.345277 4679 scope.go:117] "RemoveContainer" containerID="541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.394532 4679 scope.go:117] "RemoveContainer" containerID="56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.410332 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfw7z"] Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.439872 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nfw7z"] Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.469206 4679 scope.go:117] "RemoveContainer" containerID="3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.487578 4679 scope.go:117] "RemoveContainer" containerID="541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446" Feb 03 12:55:01 crc kubenswrapper[4679]: E0203 12:55:01.488007 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446\": container with ID starting with 541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446 not found: ID does not exist" containerID="541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.488043 
4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446"} err="failed to get container status \"541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446\": rpc error: code = NotFound desc = could not find container \"541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446\": container with ID starting with 541ba378ad2c3d1625f0968a34297ac3e15f071d0db568fec7236bf951f14446 not found: ID does not exist" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.488063 4679 scope.go:117] "RemoveContainer" containerID="56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04" Feb 03 12:55:01 crc kubenswrapper[4679]: E0203 12:55:01.488465 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04\": container with ID starting with 56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04 not found: ID does not exist" containerID="56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.488527 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04"} err="failed to get container status \"56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04\": rpc error: code = NotFound desc = could not find container \"56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04\": container with ID starting with 56169c7c0028b904c8c0233881be560dce4293f4fee16c4a61c526e2dd388e04 not found: ID does not exist" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.488570 4679 scope.go:117] "RemoveContainer" containerID="3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da" Feb 03 12:55:01 crc kubenswrapper[4679]: E0203 12:55:01.488831 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da\": container with ID starting with 3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da not found: ID does not exist" containerID="3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da" Feb 03 12:55:01 crc kubenswrapper[4679]: I0203 12:55:01.488855 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da"} err="failed to get container status \"3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da\": rpc error: code = NotFound desc = could not find container \"3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da\": container with ID starting with 3fdac34c77c8b0571eba44eec253bdfe5e9150521b3ffb85499c337f9f9d18da not found: ID does not exist" Feb 03 12:55:02 crc kubenswrapper[4679]: I0203 12:55:02.223244 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" path="/var/lib/kubelet/pods/15a7e0ac-d139-410b-a4dd-994e1bfd8450/volumes" Feb 03 12:56:06 crc kubenswrapper[4679]: I0203 12:56:06.735788 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:56:06 crc kubenswrapper[4679]: I0203 12:56:06.736346 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:56:36 crc kubenswrapper[4679]: I0203 12:56:36.736183 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 12:56:36 crc kubenswrapper[4679]: I0203 12:56:36.736810 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.761936 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2zx9n"] Feb 03 12:56:42 crc kubenswrapper[4679]: E0203 12:56:42.762922 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="extract-content" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.762939 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="extract-content" Feb 03 12:56:42 crc kubenswrapper[4679]: E0203 12:56:42.762958 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="extract-utilities" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.762966 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="extract-utilities" Feb 03 12:56:42 crc kubenswrapper[4679]: E0203 12:56:42.762981 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="registry-server" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.762989 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="registry-server" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.763244 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a7e0ac-d139-410b-a4dd-994e1bfd8450" containerName="registry-server" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.765099 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.775677 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zx9n"] Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.902240 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scdqv\" (UniqueName: \"kubernetes.io/projected/b27d318f-21fa-4c3f-8308-792618288545-kube-api-access-scdqv\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.902345 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-catalog-content\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:42 crc kubenswrapper[4679]: I0203 12:56:42.902498 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-utilities\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.004471 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scdqv\" (UniqueName: \"kubernetes.io/projected/b27d318f-21fa-4c3f-8308-792618288545-kube-api-access-scdqv\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.004555 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-catalog-content\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.004634 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-utilities\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.005096 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-catalog-content\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.005140 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-utilities\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.028605 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scdqv\" (UniqueName: \"kubernetes.io/projected/b27d318f-21fa-4c3f-8308-792618288545-kube-api-access-scdqv\") pod \"certified-operators-2zx9n\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.095956 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:43 crc kubenswrapper[4679]: I0203 12:56:43.593033 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2zx9n"] Feb 03 12:56:44 crc kubenswrapper[4679]: I0203 12:56:44.201386 4679 generic.go:334] "Generic (PLEG): container finished" podID="b27d318f-21fa-4c3f-8308-792618288545" containerID="8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a" exitCode=0 Feb 03 12:56:44 crc kubenswrapper[4679]: I0203 12:56:44.201453 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerDied","Data":"8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a"} Feb 03 12:56:44 crc kubenswrapper[4679]: I0203 12:56:44.201747 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerStarted","Data":"ce63afb130050a14adb2ba9273cf764c3b88beaef40d28bbd0558f53d882be6b"} Feb 03 12:56:44 crc kubenswrapper[4679]: I0203 12:56:44.203514 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 12:56:45 crc kubenswrapper[4679]: I0203 12:56:45.215226 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerStarted","Data":"b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee"} Feb 03 12:56:46 crc kubenswrapper[4679]: I0203 12:56:46.225443 4679 generic.go:334] "Generic (PLEG): container finished" podID="b27d318f-21fa-4c3f-8308-792618288545" containerID="b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee" exitCode=0 Feb 03 12:56:46 crc kubenswrapper[4679]: I0203 12:56:46.225562 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerDied","Data":"b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee"} Feb 03 12:56:46 crc kubenswrapper[4679]: I0203 12:56:46.225826 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerStarted","Data":"af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462"} Feb 03 12:56:46 crc kubenswrapper[4679]: I0203 12:56:46.243867 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2zx9n" podStartSLOduration=2.558340494 podStartE2EDuration="4.243848159s" podCreationTimestamp="2026-02-03 12:56:42 +0000 UTC" firstStartedPulling="2026-02-03 12:56:44.203241691 +0000 UTC m=+3076.678137779" lastFinishedPulling="2026-02-03 12:56:45.888749356 +0000 UTC m=+3078.363645444" observedRunningTime="2026-02-03 12:56:46.242496555 +0000 UTC m=+3078.717392643" watchObservedRunningTime="2026-02-03 
12:56:46.243848159 +0000 UTC m=+3078.718744267" Feb 03 12:56:53 crc kubenswrapper[4679]: I0203 12:56:53.096232 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:53 crc kubenswrapper[4679]: I0203 12:56:53.096869 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:53 crc kubenswrapper[4679]: I0203 12:56:53.139907 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:53 crc kubenswrapper[4679]: I0203 12:56:53.355258 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:53 crc kubenswrapper[4679]: I0203 12:56:53.399305 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zx9n"] Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.331266 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2zx9n" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="registry-server" containerID="cri-o://af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462" gracePeriod=2 Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.819090 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.961693 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scdqv\" (UniqueName: \"kubernetes.io/projected/b27d318f-21fa-4c3f-8308-792618288545-kube-api-access-scdqv\") pod \"b27d318f-21fa-4c3f-8308-792618288545\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.961740 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-catalog-content\") pod \"b27d318f-21fa-4c3f-8308-792618288545\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.961869 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-utilities\") pod \"b27d318f-21fa-4c3f-8308-792618288545\" (UID: \"b27d318f-21fa-4c3f-8308-792618288545\") " Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.963508 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-utilities" (OuterVolumeSpecName: "utilities") pod "b27d318f-21fa-4c3f-8308-792618288545" (UID: "b27d318f-21fa-4c3f-8308-792618288545"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:56:55 crc kubenswrapper[4679]: I0203 12:56:55.969874 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27d318f-21fa-4c3f-8308-792618288545-kube-api-access-scdqv" (OuterVolumeSpecName: "kube-api-access-scdqv") pod "b27d318f-21fa-4c3f-8308-792618288545" (UID: "b27d318f-21fa-4c3f-8308-792618288545"). InnerVolumeSpecName "kube-api-access-scdqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.064340 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.064744 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scdqv\" (UniqueName: \"kubernetes.io/projected/b27d318f-21fa-4c3f-8308-792618288545-kube-api-access-scdqv\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.102174 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b27d318f-21fa-4c3f-8308-792618288545" (UID: "b27d318f-21fa-4c3f-8308-792618288545"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.166308 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27d318f-21fa-4c3f-8308-792618288545-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.342406 4679 generic.go:334] "Generic (PLEG): container finished" podID="b27d318f-21fa-4c3f-8308-792618288545" containerID="af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462" exitCode=0 Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.342476 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2zx9n" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.342500 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerDied","Data":"af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462"} Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.343527 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2zx9n" event={"ID":"b27d318f-21fa-4c3f-8308-792618288545","Type":"ContainerDied","Data":"ce63afb130050a14adb2ba9273cf764c3b88beaef40d28bbd0558f53d882be6b"} Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.343579 4679 scope.go:117] "RemoveContainer" containerID="af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.383464 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2zx9n"] Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.389256 4679 scope.go:117] "RemoveContainer" containerID="b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.394239 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2zx9n"] Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.420749 4679 scope.go:117] "RemoveContainer" containerID="8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.480181 4679 scope.go:117] "RemoveContainer" containerID="af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462" Feb 03 12:56:56 crc kubenswrapper[4679]: E0203 12:56:56.480778 4679 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462\": container with ID starting with af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462 not found: ID does not exist" containerID="af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.480822 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462"} err="failed to get container status \"af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462\": rpc error: code = NotFound desc = could not find container \"af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462\": container with ID starting with af9c80fbfef5b0e00f1c2ceb2a96b33ae8168e7d5bc9e7c2025584914295e462 not found: ID does not exist" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.480853 4679 scope.go:117] "RemoveContainer" containerID="b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee" Feb 03 12:56:56 crc kubenswrapper[4679]: E0203 12:56:56.481295 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee\": container with ID starting with b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee not found: ID does not exist" containerID="b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.481336 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee"} err="failed to get container status \"b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee\": rpc error: code = NotFound desc = could not find container \"b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee\": container with ID starting with b4f763b8d71c24181bcbbb839f28e0fb75c6048a0c27c44922b66759d5fca3ee not found: ID does not exist" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.481383 4679 scope.go:117] "RemoveContainer" containerID="8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a" Feb 03 12:56:56 crc kubenswrapper[4679]: E0203 12:56:56.481794 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a\": container with ID starting with 8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a not found: ID does not exist" containerID="8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a" Feb 03 12:56:56 crc kubenswrapper[4679]: I0203 12:56:56.481825 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a"} err="failed to get container status \"8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a\": rpc error: code = NotFound desc = could not find container \"8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a\": container with ID starting with 8ded867a36ab6050e6f8fdc790925382768f2cf767f67a38f084cf544a35484a not found: ID does not exist" Feb 03 12:56:58 crc kubenswrapper[4679]: I0203 12:56:58.221665 4679 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="b27d318f-21fa-4c3f-8308-792618288545" path="/var/lib/kubelet/pods/b27d318f-21fa-4c3f-8308-792618288545/volumes"
Feb 03 12:57:06 crc kubenswrapper[4679]: I0203 12:57:06.736244 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 12:57:06 crc kubenswrapper[4679]: I0203 12:57:06.736907 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 12:57:06 crc kubenswrapper[4679]: I0203 12:57:06.736987 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg"
Feb 03 12:57:06 crc kubenswrapper[4679]: I0203 12:57:06.738514 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 12:57:06 crc kubenswrapper[4679]: I0203 12:57:06.738768 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" gracePeriod=600
Feb 03 12:57:06 crc kubenswrapper[4679]: E0203 12:57:06.872734 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:57:07 crc kubenswrapper[4679]: I0203 12:57:07.438101 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" exitCode=0
Feb 03 12:57:07 crc kubenswrapper[4679]: I0203 12:57:07.438151 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"}
Feb 03 12:57:07 crc kubenswrapper[4679]: I0203 12:57:07.438190 4679 scope.go:117] "RemoveContainer" containerID="5c22cbffe7c5e4756198776ddbc4a86dc06d61b5aeb81b3a6f04c18c85b1e1ae"
Feb 03 12:57:07 crc kubenswrapper[4679]: I0203 12:57:07.439015 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:57:07 crc kubenswrapper[4679]: E0203 12:57:07.439306 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:57:20 crc kubenswrapper[4679]: I0203 12:57:20.211825 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:57:20 crc kubenswrapper[4679]: E0203 12:57:20.212726 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:57:32 crc kubenswrapper[4679]: I0203 12:57:32.212169 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:57:32 crc kubenswrapper[4679]: E0203 12:57:32.212951 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:57:47 crc kubenswrapper[4679]: I0203 12:57:47.211674 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:57:47 crc kubenswrapper[4679]: E0203 12:57:47.212430 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:58:00 crc kubenswrapper[4679]: I0203 12:58:00.211915 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:58:00 crc kubenswrapper[4679]: E0203 12:58:00.212909 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:58:12 crc kubenswrapper[4679]: I0203 12:58:12.211969 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:58:12 crc kubenswrapper[4679]: E0203 12:58:12.212849 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:58:26 crc kubenswrapper[4679]: I0203 12:58:26.212610 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:58:26 crc kubenswrapper[4679]: E0203 12:58:26.213801 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:58:41 crc kubenswrapper[4679]: I0203 12:58:41.212312 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:58:41 crc kubenswrapper[4679]: E0203 12:58:41.213066 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:58:53 crc kubenswrapper[4679]: I0203 12:58:53.212033 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:58:53 crc kubenswrapper[4679]: E0203 12:58:53.212826 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:59:07 crc kubenswrapper[4679]: I0203 12:59:07.212134 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:59:07 crc kubenswrapper[4679]: E0203 12:59:07.212957 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:59:20 crc kubenswrapper[4679]: I0203 12:59:20.211538 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:59:20 crc kubenswrapper[4679]: E0203 12:59:20.212349 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:59:34 crc kubenswrapper[4679]: I0203 12:59:34.212216 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:59:34 crc kubenswrapper[4679]: E0203 12:59:34.213092 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 12:59:48 crc kubenswrapper[4679]: I0203 12:59:48.216795 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 12:59:48 crc kubenswrapper[4679]: E0203 12:59:48.217637 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.148571 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt"]
Feb 03 13:00:00 crc kubenswrapper[4679]: E0203 13:00:00.149392 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="extract-content"
Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.149404 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="extract-content"
Feb 03 13:00:00 crc kubenswrapper[4679]: E0203 13:00:00.149428 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="extract-utilities"
Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.149434 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="extract-utilities"
Feb 03 13:00:00 crc kubenswrapper[4679]: E0203 13:00:00.149460 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="registry-server"
Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.149466 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="registry-server"
Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.149636 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27d318f-21fa-4c3f-8308-792618288545" containerName="registry-server"
Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.150248 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.152647 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.152862 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.168483 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt"] Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.296438 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4jb\" (UniqueName: \"kubernetes.io/projected/62788198-c28d-439b-a82e-8a4c34c8a7e7-kube-api-access-mn4jb\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.297031 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62788198-c28d-439b-a82e-8a4c34c8a7e7-config-volume\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.297140 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62788198-c28d-439b-a82e-8a4c34c8a7e7-secret-volume\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.399184 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62788198-c28d-439b-a82e-8a4c34c8a7e7-secret-volume\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.399230 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62788198-c28d-439b-a82e-8a4c34c8a7e7-config-volume\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.399333 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4jb\" (UniqueName: \"kubernetes.io/projected/62788198-c28d-439b-a82e-8a4c34c8a7e7-kube-api-access-mn4jb\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.400345 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62788198-c28d-439b-a82e-8a4c34c8a7e7-config-volume\") pod 
\"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.408244 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62788198-c28d-439b-a82e-8a4c34c8a7e7-secret-volume\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.417666 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4jb\" (UniqueName: \"kubernetes.io/projected/62788198-c28d-439b-a82e-8a4c34c8a7e7-kube-api-access-mn4jb\") pod \"collect-profiles-29502060-7zjpt\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.475799 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.920420 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt"] Feb 03 13:00:00 crc kubenswrapper[4679]: I0203 13:00:00.966982 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" event={"ID":"62788198-c28d-439b-a82e-8a4c34c8a7e7","Type":"ContainerStarted","Data":"da2fe03ce61817c09d0473a33f86fe2871be833af06fa344a77281bcd9f713e6"} Feb 03 13:00:01 crc kubenswrapper[4679]: I0203 13:00:01.211822 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:00:01 crc kubenswrapper[4679]: E0203 13:00:01.212141 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:00:01 crc kubenswrapper[4679]: I0203 13:00:01.976706 4679 generic.go:334] "Generic (PLEG): container finished" podID="62788198-c28d-439b-a82e-8a4c34c8a7e7" containerID="3e35ed2a5384d7264498f8adef4de9f4b5ab1c118d2e499b6eb1d351475bdc79" exitCode=0 Feb 03 13:00:01 crc kubenswrapper[4679]: I0203 13:00:01.976910 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" event={"ID":"62788198-c28d-439b-a82e-8a4c34c8a7e7","Type":"ContainerDied","Data":"3e35ed2a5384d7264498f8adef4de9f4b5ab1c118d2e499b6eb1d351475bdc79"} Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.352256 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.455342 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn4jb\" (UniqueName: \"kubernetes.io/projected/62788198-c28d-439b-a82e-8a4c34c8a7e7-kube-api-access-mn4jb\") pod \"62788198-c28d-439b-a82e-8a4c34c8a7e7\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.455623 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62788198-c28d-439b-a82e-8a4c34c8a7e7-config-volume\") pod \"62788198-c28d-439b-a82e-8a4c34c8a7e7\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.455775 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62788198-c28d-439b-a82e-8a4c34c8a7e7-secret-volume\") pod \"62788198-c28d-439b-a82e-8a4c34c8a7e7\" (UID: \"62788198-c28d-439b-a82e-8a4c34c8a7e7\") " Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.456788 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62788198-c28d-439b-a82e-8a4c34c8a7e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "62788198-c28d-439b-a82e-8a4c34c8a7e7" (UID: "62788198-c28d-439b-a82e-8a4c34c8a7e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.463325 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62788198-c28d-439b-a82e-8a4c34c8a7e7-kube-api-access-mn4jb" (OuterVolumeSpecName: "kube-api-access-mn4jb") pod "62788198-c28d-439b-a82e-8a4c34c8a7e7" (UID: "62788198-c28d-439b-a82e-8a4c34c8a7e7"). InnerVolumeSpecName "kube-api-access-mn4jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.463556 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62788198-c28d-439b-a82e-8a4c34c8a7e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62788198-c28d-439b-a82e-8a4c34c8a7e7" (UID: "62788198-c28d-439b-a82e-8a4c34c8a7e7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.558416 4679 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62788198-c28d-439b-a82e-8a4c34c8a7e7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.558459 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn4jb\" (UniqueName: \"kubernetes.io/projected/62788198-c28d-439b-a82e-8a4c34c8a7e7-kube-api-access-mn4jb\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.558472 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62788198-c28d-439b-a82e-8a4c34c8a7e7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.993948 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" event={"ID":"62788198-c28d-439b-a82e-8a4c34c8a7e7","Type":"ContainerDied","Data":"da2fe03ce61817c09d0473a33f86fe2871be833af06fa344a77281bcd9f713e6"} Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.994283 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2fe03ce61817c09d0473a33f86fe2871be833af06fa344a77281bcd9f713e6" Feb 03 13:00:03 crc kubenswrapper[4679]: I0203 13:00:03.994023 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-7zjpt" Feb 03 13:00:04 crc kubenswrapper[4679]: I0203 13:00:04.436794 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd"] Feb 03 13:00:04 crc kubenswrapper[4679]: I0203 13:00:04.445281 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502015-r2krd"] Feb 03 13:00:06 crc kubenswrapper[4679]: I0203 13:00:06.226543 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22114ac5-b236-4e7c-ba0a-5703373937b2" path="/var/lib/kubelet/pods/22114ac5-b236-4e7c-ba0a-5703373937b2/volumes" Feb 03 13:00:15 crc kubenswrapper[4679]: I0203 13:00:15.211528 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:00:15 crc kubenswrapper[4679]: E0203 13:00:15.212385 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:00:28 crc kubenswrapper[4679]: I0203 13:00:28.217638 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:00:28 crc kubenswrapper[4679]: E0203 13:00:28.218231 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:00:35 crc kubenswrapper[4679]: I0203 13:00:35.777497 4679 scope.go:117] "RemoveContainer" containerID="57fbf4c40bdc33bd0f195128acf9911c1ef9038b0f6d97dfada7c1aee2b196fc" Feb 03 13:00:42 crc kubenswrapper[4679]: I0203 13:00:42.211845 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:00:42 crc kubenswrapper[4679]: E0203 13:00:42.212673 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.577987 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sh4dp"] Feb 03 13:00:43 crc kubenswrapper[4679]: E0203 13:00:43.578687 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62788198-c28d-439b-a82e-8a4c34c8a7e7" containerName="collect-profiles" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.578700 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="62788198-c28d-439b-a82e-8a4c34c8a7e7" containerName="collect-profiles" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.578886 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="62788198-c28d-439b-a82e-8a4c34c8a7e7" containerName="collect-profiles" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.580267 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.599980 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh4dp"] Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.679471 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbnln\" (UniqueName: \"kubernetes.io/projected/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-kube-api-access-qbnln\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.679527 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-utilities\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.679631 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-catalog-content\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.781885 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-catalog-content\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.782040 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbnln\" (UniqueName: \"kubernetes.io/projected/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-kube-api-access-qbnln\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.782070 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-utilities\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.782554 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-catalog-content\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.782570 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-utilities\") pod \"redhat-marketplace-sh4dp\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") " pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.806156 4679 operation_generator.go:637] "MountVolume.SetUp 
Feb 03 13:00:43 crc kubenswrapper[4679]: I0203 13:00:43.900680 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh4dp"
Feb 03 13:00:44 crc kubenswrapper[4679]: I0203 13:00:44.435051 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh4dp"]
Feb 03 13:00:45 crc kubenswrapper[4679]: I0203 13:00:45.348301 4679 generic.go:334] "Generic (PLEG): container finished" podID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerID="6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce" exitCode=0
Feb 03 13:00:45 crc kubenswrapper[4679]: I0203 13:00:45.348403 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh4dp" event={"ID":"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440","Type":"ContainerDied","Data":"6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce"}
Feb 03 13:00:45 crc kubenswrapper[4679]: I0203 13:00:45.349488 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh4dp" event={"ID":"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440","Type":"ContainerStarted","Data":"bf3ed304e70837fcadd9403aff602dc9b81341119b08b90b02df18bef7cae03f"}
Feb 03 13:00:47 crc kubenswrapper[4679]: I0203 13:00:47.367918 4679 generic.go:334] "Generic (PLEG): container finished" podID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerID="f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0" exitCode=0
Feb 03 13:00:47 crc kubenswrapper[4679]: I0203 13:00:47.368012 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh4dp" event={"ID":"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440","Type":"ContainerDied","Data":"f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0"}
Feb 03 13:00:48 crc kubenswrapper[4679]: I0203 13:00:48.377224 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh4dp" event={"ID":"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440","Type":"ContainerStarted","Data":"b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3"}
Feb 03 13:00:48 crc kubenswrapper[4679]: I0203 13:00:48.403851 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sh4dp" podStartSLOduration=2.928475443 podStartE2EDuration="5.403820216s" podCreationTimestamp="2026-02-03 13:00:43 +0000 UTC" firstStartedPulling="2026-02-03 13:00:45.3500864 +0000 UTC m=+3317.824982488" lastFinishedPulling="2026-02-03 13:00:47.825431173 +0000 UTC m=+3320.300327261" observedRunningTime="2026-02-03 13:00:48.400269406 +0000 UTC m=+3320.875165494" watchObservedRunningTime="2026-02-03 13:00:48.403820216 +0000 UTC m=+3320.878716294"
Feb 03 13:00:53 crc kubenswrapper[4679]: I0203 13:00:53.901704 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sh4dp"
Feb 03 13:00:53 crc kubenswrapper[4679]: I0203 13:00:53.902237 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sh4dp"
Feb 03 13:00:53 crc kubenswrapper[4679]: I0203 13:00:53.947762 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sh4dp"
Feb 03 13:00:54 crc kubenswrapper[4679]: I0203 13:00:54.465565 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sh4dp"
Feb 03 13:00:54 crc kubenswrapper[4679]: I0203 13:00:54.516719 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh4dp"]
Feb 03 13:00:56 crc kubenswrapper[4679]: I0203 13:00:56.212421 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91"
Feb 03 13:00:56 crc kubenswrapper[4679]: E0203 13:00:56.213132 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 13:00:56 crc kubenswrapper[4679]: I0203 13:00:56.635812 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sh4dp" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="registry-server" containerID="cri-o://b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3" gracePeriod=2
Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.198388 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh4dp"
Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.260128 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbnln\" (UniqueName: \"kubernetes.io/projected/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-kube-api-access-qbnln\") pod \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") "
Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.260462 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-utilities\") pod \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") "
Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.260500 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-catalog-content\") pod \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\" (UID: \"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440\") "
Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.261668 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-utilities" (OuterVolumeSpecName: "utilities") pod "bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" (UID: "bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.266720 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-kube-api-access-qbnln" (OuterVolumeSpecName: "kube-api-access-qbnln") pod "bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" (UID: "bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440"). InnerVolumeSpecName "kube-api-access-qbnln". PluginName "kubernetes.io/projected", VolumeGidValue ""
InnerVolumeSpecName "kube-api-access-qbnln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.288316 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" (UID: "bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.362049 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbnln\" (UniqueName: \"kubernetes.io/projected/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-kube-api-access-qbnln\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.362077 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.362086 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.664416 4679 generic.go:334] "Generic (PLEG): container finished" podID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerID="b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3" exitCode=0 Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.664707 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sh4dp" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.664731 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh4dp" event={"ID":"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440","Type":"ContainerDied","Data":"b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3"} Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.664989 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sh4dp" event={"ID":"bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440","Type":"ContainerDied","Data":"bf3ed304e70837fcadd9403aff602dc9b81341119b08b90b02df18bef7cae03f"} Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.665024 4679 scope.go:117] "RemoveContainer" containerID="b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.685789 4679 scope.go:117] "RemoveContainer" containerID="f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.705523 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh4dp"] Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.713668 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sh4dp"] Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.725622 4679 scope.go:117] "RemoveContainer" containerID="6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.764046 4679 scope.go:117] "RemoveContainer" containerID="b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3" Feb 03 13:00:57 crc kubenswrapper[4679]: 
E0203 13:00:57.764857 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3\": container with ID starting with b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3 not found: ID does not exist" containerID="b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.764893 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3"} err="failed to get container status \"b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3\": rpc error: code = NotFound desc = could not find container \"b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3\": container with ID starting with b7f179bfbbcd998931667fce35fb053c7c3ce83e30d68bb294363dc29f04dcb3 not found: ID does not exist" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.764913 4679 scope.go:117] "RemoveContainer" containerID="f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0" Feb 03 13:00:57 crc kubenswrapper[4679]: E0203 13:00:57.765289 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0\": container with ID starting with f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0 not found: ID does not exist" containerID="f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.765336 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0"} err="failed to get container status \"f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0\": rpc error: code = NotFound desc = could not find container \"f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0\": container with ID starting with f9c22561917b0b4353ad740d31f88735b4434853a7fd1ee99a1b83fc4f5185a0 not found: ID does not exist" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.765377 4679 scope.go:117] "RemoveContainer" containerID="6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce" Feb 03 13:00:57 crc kubenswrapper[4679]: E0203 13:00:57.765707 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce\": container with ID starting with 6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce not found: ID does not exist" containerID="6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce" Feb 03 13:00:57 crc kubenswrapper[4679]: I0203 13:00:57.765755 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce"} err="failed to get container status \"6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce\": rpc error: code = NotFound desc = could not find container \"6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce\": container with ID starting with 6b366c572d13973dff0b71304278ca961a2d29aba1fb5d80ad43c1321ebe84ce not found: ID does not exist" Feb 03 13:00:58 crc kubenswrapper[4679]: I0203 13:00:58.222187 
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.156423 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29502061-wb5cr"]
Feb 03 13:01:00 crc kubenswrapper[4679]: E0203 13:01:00.157198 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="extract-utilities"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.157216 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="extract-utilities"
Feb 03 13:01:00 crc kubenswrapper[4679]: E0203 13:01:00.157229 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="registry-server"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.157239 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="registry-server"
Feb 03 13:01:00 crc kubenswrapper[4679]: E0203 13:01:00.157278 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="extract-content"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.157285 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="extract-content"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.157552 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdbab2c-5509-4cb7-a7d9-18bd2f1a5440" containerName="registry-server"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.158230 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.169862 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502061-wb5cr"]
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.220501 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-fernet-keys\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.220583 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffqk\" (UniqueName: \"kubernetes.io/projected/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-kube-api-access-wffqk\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.220701 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.220731 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-config-data\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.322687 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.322751 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-config-data\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.322798 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-fernet-keys\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.322849 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffqk\" (UniqueName: \"kubernetes.io/projected/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-kube-api-access-wffqk\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.328858 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr"
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.328884 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-fernet-keys\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.329518 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-config-data\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.338871 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffqk\" (UniqueName: \"kubernetes.io/projected/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-kube-api-access-wffqk\") pod \"keystone-cron-29502061-wb5cr\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.478939 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:00 crc kubenswrapper[4679]: I0203 13:01:00.921963 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502061-wb5cr"] Feb 03 13:01:01 crc kubenswrapper[4679]: I0203 13:01:01.704698 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-wb5cr" event={"ID":"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3","Type":"ContainerStarted","Data":"78d79d38b826cbb517d75a5fcd3e79ddb11a3e902e955353a5b07a593a74ea04"} Feb 03 13:01:01 crc kubenswrapper[4679]: I0203 13:01:01.705236 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-wb5cr" event={"ID":"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3","Type":"ContainerStarted","Data":"4f367d6d048e1c3c7871b250eff498f3f7760e90b9ea4156a873a6f012776067"} Feb 03 13:01:01 crc kubenswrapper[4679]: I0203 13:01:01.741545 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29502061-wb5cr" podStartSLOduration=1.741518473 podStartE2EDuration="1.741518473s" podCreationTimestamp="2026-02-03 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:01:01.728700476 +0000 UTC m=+3334.203596594" watchObservedRunningTime="2026-02-03 13:01:01.741518473 +0000 UTC m=+3334.216414551" Feb 03 13:01:03 crc kubenswrapper[4679]: I0203 13:01:03.724612 4679 generic.go:334] "Generic (PLEG): container finished" podID="23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" containerID="78d79d38b826cbb517d75a5fcd3e79ddb11a3e902e955353a5b07a593a74ea04" exitCode=0 Feb 03 13:01:03 crc kubenswrapper[4679]: I0203 13:01:03.724657 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-wb5cr" event={"ID":"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3","Type":"ContainerDied","Data":"78d79d38b826cbb517d75a5fcd3e79ddb11a3e902e955353a5b07a593a74ea04"} Feb 03 13:01:05 crc kubenswrapper[4679]: 
I0203 13:01:05.180536 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.229635 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wffqk\" (UniqueName: \"kubernetes.io/projected/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-kube-api-access-wffqk\") pod \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.229826 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-config-data\") pod \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.229963 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle\") pod \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.230038 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-fernet-keys\") pod \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\" (UID: \"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3\") " Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.237181 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-kube-api-access-wffqk" (OuterVolumeSpecName: "kube-api-access-wffqk") pod "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" (UID: "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3"). InnerVolumeSpecName "kube-api-access-wffqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.238534 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" (UID: "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.265299 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" (UID: "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.292833 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-config-data" (OuterVolumeSpecName: "config-data") pod "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" (UID: "23cead04-2ba2-47aa-8b2c-fe29c2a25fb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.334082 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wffqk\" (UniqueName: \"kubernetes.io/projected/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-kube-api-access-wffqk\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.334126 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.334139 4679 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.334150 4679 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23cead04-2ba2-47aa-8b2c-fe29c2a25fb3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.747853 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502061-wb5cr" event={"ID":"23cead04-2ba2-47aa-8b2c-fe29c2a25fb3","Type":"ContainerDied","Data":"4f367d6d048e1c3c7871b250eff498f3f7760e90b9ea4156a873a6f012776067"} Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.747900 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f367d6d048e1c3c7871b250eff498f3f7760e90b9ea4156a873a6f012776067" Feb 03 13:01:05 crc kubenswrapper[4679]: I0203 13:01:05.747960 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502061-wb5cr" Feb 03 13:01:07 crc kubenswrapper[4679]: I0203 13:01:07.212293 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:01:07 crc kubenswrapper[4679]: E0203 13:01:07.212622 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:01:20 crc kubenswrapper[4679]: I0203 13:01:20.211809 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:01:20 crc kubenswrapper[4679]: E0203 13:01:20.212628 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.384078 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dkvz9"] Feb 03 13:01:24 crc kubenswrapper[4679]: E0203 13:01:24.385340 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" 
containerName="keystone-cron" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.385392 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" containerName="keystone-cron" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.385734 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="23cead04-2ba2-47aa-8b2c-fe29c2a25fb3" containerName="keystone-cron" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.390282 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.394839 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkvz9"] Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.457416 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-utilities\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.457554 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-catalog-content\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.457712 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgwqk\" (UniqueName: \"kubernetes.io/projected/7e622604-f7d1-4e1e-8e31-6643715e643b-kube-api-access-jgwqk\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.559876 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgwqk\" (UniqueName: \"kubernetes.io/projected/7e622604-f7d1-4e1e-8e31-6643715e643b-kube-api-access-jgwqk\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.560111 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-utilities\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.560195 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-catalog-content\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.560642 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-utilities\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" 
Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.560949 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-catalog-content\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.582424 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgwqk\" (UniqueName: \"kubernetes.io/projected/7e622604-f7d1-4e1e-8e31-6643715e643b-kube-api-access-jgwqk\") pod \"redhat-operators-dkvz9\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:24 crc kubenswrapper[4679]: I0203 13:01:24.713281 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:25 crc kubenswrapper[4679]: I0203 13:01:25.225544 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkvz9"] Feb 03 13:01:25 crc kubenswrapper[4679]: I0203 13:01:25.916491 4679 generic.go:334] "Generic (PLEG): container finished" podID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerID="84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695" exitCode=0 Feb 03 13:01:25 crc kubenswrapper[4679]: I0203 13:01:25.916934 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerDied","Data":"84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695"} Feb 03 13:01:25 crc kubenswrapper[4679]: I0203 13:01:25.916989 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerStarted","Data":"5187f900120e713de0dbfd3a3d4c70ce73c1c12eef22356104c0cb32db2398d6"} Feb 03 13:01:26 crc kubenswrapper[4679]: I0203 13:01:26.931606 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerStarted","Data":"374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e"} Feb 03 13:01:29 crc kubenswrapper[4679]: I0203 13:01:29.978500 4679 generic.go:334] "Generic (PLEG): container finished" podID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerID="374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e" exitCode=0 Feb 03 13:01:29 crc kubenswrapper[4679]: I0203 13:01:29.978554 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerDied","Data":"374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e"} Feb 03 13:01:30 crc kubenswrapper[4679]: I0203 13:01:30.991875 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerStarted","Data":"fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4"} Feb 03 13:01:31 crc kubenswrapper[4679]: I0203 13:01:31.015230 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dkvz9" podStartSLOduration=2.232833849 podStartE2EDuration="7.015209078s" podCreationTimestamp="2026-02-03 13:01:24 +0000 
UTC" firstStartedPulling="2026-02-03 13:01:25.918442868 +0000 UTC m=+3358.393338956" lastFinishedPulling="2026-02-03 13:01:30.700818087 +0000 UTC m=+3363.175714185" observedRunningTime="2026-02-03 13:01:31.006014513 +0000 UTC m=+3363.480910611" watchObservedRunningTime="2026-02-03 13:01:31.015209078 +0000 UTC m=+3363.490105166" Feb 03 13:01:32 crc kubenswrapper[4679]: I0203 13:01:32.211584 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:01:32 crc kubenswrapper[4679]: E0203 13:01:32.212124 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:01:34 crc kubenswrapper[4679]: I0203 13:01:34.713666 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:34 crc kubenswrapper[4679]: I0203 13:01:34.714013 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:35 crc kubenswrapper[4679]: I0203 13:01:35.759834 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dkvz9" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="registry-server" probeResult="failure" output=< Feb 03 13:01:35 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s Feb 03 13:01:35 crc kubenswrapper[4679]: > Feb 03 13:01:44 crc kubenswrapper[4679]: I0203 13:01:44.211778 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:01:44 crc kubenswrapper[4679]: E0203 13:01:44.212661 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:01:44 crc kubenswrapper[4679]: I0203 13:01:44.761110 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:44 crc kubenswrapper[4679]: I0203 13:01:44.820448 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:44 crc kubenswrapper[4679]: I0203 13:01:44.997508 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkvz9"] Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.159007 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dkvz9" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="registry-server" containerID="cri-o://fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4" gracePeriod=2 Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.694246 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.802732 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-catalog-content\") pod \"7e622604-f7d1-4e1e-8e31-6643715e643b\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.802831 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-utilities\") pod \"7e622604-f7d1-4e1e-8e31-6643715e643b\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.803546 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgwqk\" (UniqueName: \"kubernetes.io/projected/7e622604-f7d1-4e1e-8e31-6643715e643b-kube-api-access-jgwqk\") pod \"7e622604-f7d1-4e1e-8e31-6643715e643b\" (UID: \"7e622604-f7d1-4e1e-8e31-6643715e643b\") " Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.803694 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-utilities" (OuterVolumeSpecName: "utilities") pod "7e622604-f7d1-4e1e-8e31-6643715e643b" (UID: "7e622604-f7d1-4e1e-8e31-6643715e643b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.804183 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.817262 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e622604-f7d1-4e1e-8e31-6643715e643b-kube-api-access-jgwqk" (OuterVolumeSpecName: "kube-api-access-jgwqk") pod "7e622604-f7d1-4e1e-8e31-6643715e643b" (UID: "7e622604-f7d1-4e1e-8e31-6643715e643b"). InnerVolumeSpecName "kube-api-access-jgwqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.905850 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgwqk\" (UniqueName: \"kubernetes.io/projected/7e622604-f7d1-4e1e-8e31-6643715e643b-kube-api-access-jgwqk\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:46 crc kubenswrapper[4679]: I0203 13:01:46.925284 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e622604-f7d1-4e1e-8e31-6643715e643b" (UID: "7e622604-f7d1-4e1e-8e31-6643715e643b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.007745 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e622604-f7d1-4e1e-8e31-6643715e643b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.173054 4679 generic.go:334] "Generic (PLEG): container finished" podID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerID="fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4" exitCode=0 Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.173106 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerDied","Data":"fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4"} Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.173139 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkvz9" event={"ID":"7e622604-f7d1-4e1e-8e31-6643715e643b","Type":"ContainerDied","Data":"5187f900120e713de0dbfd3a3d4c70ce73c1c12eef22356104c0cb32db2398d6"} Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.173160 4679 scope.go:117] "RemoveContainer" containerID="fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.173187 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkvz9" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.203488 4679 scope.go:117] "RemoveContainer" containerID="374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.217611 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkvz9"] Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.224887 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dkvz9"] Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.236257 4679 scope.go:117] "RemoveContainer" containerID="84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.279787 4679 scope.go:117] "RemoveContainer" containerID="fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4" Feb 03 13:01:47 crc kubenswrapper[4679]: E0203 13:01:47.280296 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4\": container with ID starting with fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4 not found: ID does not exist" containerID="fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.280344 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4"} err="failed to get container status \"fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4\": rpc error: code = NotFound desc = could not find container \"fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4\": container with ID starting with fce0194d16ef69858c72f11b5e3d44a46783eac9904b6630972d949a4f64abd4 not found: ID does not exist" Feb 03 13:01:47 crc 
kubenswrapper[4679]: I0203 13:01:47.280394 4679 scope.go:117] "RemoveContainer" containerID="374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e" Feb 03 13:01:47 crc kubenswrapper[4679]: E0203 13:01:47.280918 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e\": container with ID starting with 374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e not found: ID does not exist" containerID="374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.280950 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e"} err="failed to get container status \"374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e\": rpc error: code = NotFound desc = could not find container \"374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e\": container with ID starting with 374ab32331a6d0c70d5a0f50e7976738ec06797f25e7319c2e6485b5087b021e not found: ID does not exist" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.280975 4679 scope.go:117] "RemoveContainer" containerID="84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695" Feb 03 13:01:47 crc kubenswrapper[4679]: E0203 13:01:47.281245 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695\": container with ID starting with 84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695 not found: ID does not exist" containerID="84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695" Feb 03 13:01:47 crc kubenswrapper[4679]: I0203 13:01:47.281279 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695"} err="failed to get container status \"84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695\": rpc error: code = NotFound desc = could not find container \"84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695\": container with ID starting with 84cc56faa27393a47aa01b10abbe9f37bfb65abd88749889f14bcdb246aea695 not found: ID does not exist" Feb 03 13:01:48 crc kubenswrapper[4679]: I0203 13:01:48.246596 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" path="/var/lib/kubelet/pods/7e622604-f7d1-4e1e-8e31-6643715e643b/volumes" Feb 03 13:01:58 crc kubenswrapper[4679]: I0203 13:01:58.217736 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:01:58 crc kubenswrapper[4679]: E0203 13:01:58.218732 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:02:12 crc kubenswrapper[4679]: I0203 13:02:12.212189 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" 
Feb 03 13:02:13 crc kubenswrapper[4679]: I0203 13:02:13.438649 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"4ab0f426c2e0d818889c27b1eb5af1a16d949987dadc37525fac0b2b65e00635"} Feb 03 13:02:30 crc kubenswrapper[4679]: I0203 13:02:30.610495 4679 generic.go:334] "Generic (PLEG): container finished" podID="a210a822-5111-45b8-9068-e745a7471962" containerID="000f0eac60af9a82525046facfa27c6e7795d92b33a1ab61127477a6a7952836" exitCode=0 Feb 03 13:02:30 crc kubenswrapper[4679]: I0203 13:02:30.610643 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a210a822-5111-45b8-9068-e745a7471962","Type":"ContainerDied","Data":"000f0eac60af9a82525046facfa27c6e7795d92b33a1ab61127477a6a7952836"} Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.051607 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121248 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-openstack-config-secret\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121386 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-temporary\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121463 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-openstack-config\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121594 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-workdir\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121768 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ca-certs\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121841 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.121950 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ssh-key\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: 
\"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.122035 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-config-data\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.122229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.122846 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfczf\" (UniqueName: \"kubernetes.io/projected/a210a822-5111-45b8-9068-e745a7471962-kube-api-access-mfczf\") pod \"a210a822-5111-45b8-9068-e745a7471962\" (UID: \"a210a822-5111-45b8-9068-e745a7471962\") " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.123233 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-config-data" (OuterVolumeSpecName: "config-data") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.124215 4679 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.124266 4679 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.125335 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.128243 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a210a822-5111-45b8-9068-e745a7471962-kube-api-access-mfczf" (OuterVolumeSpecName: "kube-api-access-mfczf") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "kube-api-access-mfczf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.128715 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). 
InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.154205 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.155153 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.161385 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.171743 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a210a822-5111-45b8-9068-e745a7471962" (UID: "a210a822-5111-45b8-9068-e745a7471962"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226087 4679 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a210a822-5111-45b8-9068-e745a7471962-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226127 4679 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226229 4679 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226242 4679 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226256 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfczf\" (UniqueName: \"kubernetes.io/projected/a210a822-5111-45b8-9068-e745a7471962-kube-api-access-mfczf\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226271 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a210a822-5111-45b8-9068-e745a7471962-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.226281 4679 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a210a822-5111-45b8-9068-e745a7471962-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.246587 4679 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.328241 4679 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.631542 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a210a822-5111-45b8-9068-e745a7471962","Type":"ContainerDied","Data":"ff9d435bfd1580f2e2af42573c51040b9dbd9378792398b934cea1017b30610b"} Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.631870 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9d435bfd1580f2e2af42573c51040b9dbd9378792398b934cea1017b30610b" Feb 03 13:02:32 crc kubenswrapper[4679]: I0203 13:02:32.631893 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.874102 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:02:37 crc kubenswrapper[4679]: E0203 13:02:37.874894 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="extract-content" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.874908 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="extract-content" Feb 03 13:02:37 crc kubenswrapper[4679]: E0203 13:02:37.874929 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="registry-server" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.874936 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="registry-server" Feb 03 13:02:37 crc kubenswrapper[4679]: E0203 13:02:37.874952 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="extract-utilities" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.874957 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="extract-utilities" Feb 03 13:02:37 crc kubenswrapper[4679]: E0203 13:02:37.874970 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a822-5111-45b8-9068-e745a7471962" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.874977 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a822-5111-45b8-9068-e745a7471962" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.875145 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e622604-f7d1-4e1e-8e31-6643715e643b" containerName="registry-server" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.875158 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210a822-5111-45b8-9068-e745a7471962" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.876493 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.883610 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cj5p9" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.889696 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.936889 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:37 crc kubenswrapper[4679]: I0203 13:02:37.936958 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njjcp\" (UniqueName: \"kubernetes.io/projected/322a6172-ec37-452f-b49c-15af3f777c8a-kube-api-access-njjcp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.039068 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.039188 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjcp\" (UniqueName: \"kubernetes.io/projected/322a6172-ec37-452f-b49c-15af3f777c8a-kube-api-access-njjcp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.039708 4679 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.060157 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjcp\" (UniqueName: \"kubernetes.io/projected/322a6172-ec37-452f-b49c-15af3f777c8a-kube-api-access-njjcp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.068645 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"322a6172-ec37-452f-b49c-15af3f777c8a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc 
kubenswrapper[4679]: I0203 13:02:38.213268 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.677821 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:02:38 crc kubenswrapper[4679]: I0203 13:02:38.679157 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:02:39 crc kubenswrapper[4679]: I0203 13:02:39.697984 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"322a6172-ec37-452f-b49c-15af3f777c8a","Type":"ContainerStarted","Data":"79176ecab4a00e6acae49fb87d1e04a005e0e0f4f9dfa6a87f376adb2bff820f"} Feb 03 13:02:40 crc kubenswrapper[4679]: I0203 13:02:40.710226 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"322a6172-ec37-452f-b49c-15af3f777c8a","Type":"ContainerStarted","Data":"041709e0fdc6849e6d551ef99341924f8fb19de9ebe819900b92d5e7e06a8fda"} Feb 03 13:02:40 crc kubenswrapper[4679]: I0203 13:02:40.737515 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.865430719 podStartE2EDuration="3.737489003s" podCreationTimestamp="2026-02-03 13:02:37 +0000 UTC" firstStartedPulling="2026-02-03 13:02:38.677449586 +0000 UTC m=+3431.152345714" lastFinishedPulling="2026-02-03 13:02:39.54950789 +0000 UTC m=+3432.024403998" observedRunningTime="2026-02-03 13:02:40.724198673 +0000 UTC m=+3433.199094821" watchObservedRunningTime="2026-02-03 13:02:40.737489003 +0000 UTC m=+3433.212385101" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.173109 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwpt/must-gather-ljl6m"] Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.175180 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.176952 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wjwpt"/"default-dockercfg-czqvk" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.177653 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wjwpt"/"openshift-service-ca.crt" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.178209 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wjwpt"/"kube-root-ca.crt" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.197795 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wjwpt/must-gather-ljl6m"] Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.329923 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3c5058d-1324-4403-9bff-a9e204ecfe46-must-gather-output\") pod \"must-gather-ljl6m\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.330989 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsssp\" (UniqueName: \"kubernetes.io/projected/e3c5058d-1324-4403-9bff-a9e204ecfe46-kube-api-access-gsssp\") pod \"must-gather-ljl6m\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.432836 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsssp\" (UniqueName: \"kubernetes.io/projected/e3c5058d-1324-4403-9bff-a9e204ecfe46-kube-api-access-gsssp\") pod \"must-gather-ljl6m\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.432983 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3c5058d-1324-4403-9bff-a9e204ecfe46-must-gather-output\") pod \"must-gather-ljl6m\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.433352 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3c5058d-1324-4403-9bff-a9e204ecfe46-must-gather-output\") pod \"must-gather-ljl6m\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.450914 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsssp\" (UniqueName: \"kubernetes.io/projected/e3c5058d-1324-4403-9bff-a9e204ecfe46-kube-api-access-gsssp\") pod \"must-gather-ljl6m\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.493585 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:03:02 crc kubenswrapper[4679]: I0203 13:03:02.952961 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wjwpt/must-gather-ljl6m"] Feb 03 13:03:03 crc kubenswrapper[4679]: I0203 13:03:03.958730 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" event={"ID":"e3c5058d-1324-4403-9bff-a9e204ecfe46","Type":"ContainerStarted","Data":"fc995491a72fbfdf862feac83af37ac0ab5537d0e09c9643cac6915a03d14244"} Feb 03 13:03:09 crc kubenswrapper[4679]: I0203 13:03:09.005733 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" event={"ID":"e3c5058d-1324-4403-9bff-a9e204ecfe46","Type":"ContainerStarted","Data":"c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b"} Feb 03 13:03:09 crc kubenswrapper[4679]: I0203 13:03:09.006312 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" event={"ID":"e3c5058d-1324-4403-9bff-a9e204ecfe46","Type":"ContainerStarted","Data":"1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d"} Feb 03 13:03:09 crc kubenswrapper[4679]: I0203 13:03:09.024401 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" podStartSLOduration=1.8221561099999999 podStartE2EDuration="7.024380533s" podCreationTimestamp="2026-02-03 13:03:02 +0000 UTC" firstStartedPulling="2026-02-03 13:03:02.969177283 +0000 UTC m=+3455.444073391" lastFinishedPulling="2026-02-03 13:03:08.171401736 +0000 UTC m=+3460.646297814" observedRunningTime="2026-02-03 13:03:09.01996606 +0000 UTC m=+3461.494862148" watchObservedRunningTime="2026-02-03 13:03:09.024380533 +0000 UTC m=+3461.499276641" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.074456 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-7wwbf"] Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.076488 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.257837 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29c680ae-e158-4656-ab1b-449fd2ba8b0c-host\") pod \"crc-debug-7wwbf\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.258266 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mz9r\" (UniqueName: \"kubernetes.io/projected/29c680ae-e158-4656-ab1b-449fd2ba8b0c-kube-api-access-8mz9r\") pod \"crc-debug-7wwbf\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.360081 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29c680ae-e158-4656-ab1b-449fd2ba8b0c-host\") pod \"crc-debug-7wwbf\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.360222 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mz9r\" (UniqueName: \"kubernetes.io/projected/29c680ae-e158-4656-ab1b-449fd2ba8b0c-kube-api-access-8mz9r\") pod \"crc-debug-7wwbf\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.360270 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29c680ae-e158-4656-ab1b-449fd2ba8b0c-host\") pod \"crc-debug-7wwbf\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.382544 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mz9r\" (UniqueName: \"kubernetes.io/projected/29c680ae-e158-4656-ab1b-449fd2ba8b0c-kube-api-access-8mz9r\") pod \"crc-debug-7wwbf\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: I0203 13:03:12.397549 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:03:12 crc kubenswrapper[4679]: W0203 13:03:12.425042 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29c680ae_e158_4656_ab1b_449fd2ba8b0c.slice/crio-90e21a2f46d0083de91edeafb47e5fe73cd35d1c2fe5a5a67d2f1c1477f732db WatchSource:0}: Error finding container 90e21a2f46d0083de91edeafb47e5fe73cd35d1c2fe5a5a67d2f1c1477f732db: Status 404 returned error can't find the container with id 90e21a2f46d0083de91edeafb47e5fe73cd35d1c2fe5a5a67d2f1c1477f732db Feb 03 13:03:13 crc kubenswrapper[4679]: I0203 13:03:13.038377 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" event={"ID":"29c680ae-e158-4656-ab1b-449fd2ba8b0c","Type":"ContainerStarted","Data":"90e21a2f46d0083de91edeafb47e5fe73cd35d1c2fe5a5a67d2f1c1477f732db"} Feb 03 13:03:24 crc kubenswrapper[4679]: I0203 13:03:24.128955 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" event={"ID":"29c680ae-e158-4656-ab1b-449fd2ba8b0c","Type":"ContainerStarted","Data":"be0427de18b3998d5de4b623cc103b89a0e485d27582b15520c25f74b0cc1dfc"} Feb 03 13:03:24 crc kubenswrapper[4679]: I0203 13:03:24.157040 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" podStartSLOduration=0.700264929 podStartE2EDuration="12.157019912s" podCreationTimestamp="2026-02-03 13:03:12 +0000 UTC" firstStartedPulling="2026-02-03 13:03:12.429255158 +0000 UTC m=+3464.904151246" lastFinishedPulling="2026-02-03 13:03:23.886010141 +0000 UTC m=+3476.360906229" observedRunningTime="2026-02-03 13:03:24.147397816 +0000 UTC m=+3476.622293894" watchObservedRunningTime="2026-02-03 13:03:24.157019912 +0000 UTC m=+3476.631916000" Feb 03 13:04:13 crc kubenswrapper[4679]: I0203 13:04:13.559811 4679 generic.go:334] "Generic (PLEG): container finished" podID="29c680ae-e158-4656-ab1b-449fd2ba8b0c" containerID="be0427de18b3998d5de4b623cc103b89a0e485d27582b15520c25f74b0cc1dfc" exitCode=0 Feb 03 13:04:13 crc kubenswrapper[4679]: I0203 13:04:13.559889 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" event={"ID":"29c680ae-e158-4656-ab1b-449fd2ba8b0c","Type":"ContainerDied","Data":"be0427de18b3998d5de4b623cc103b89a0e485d27582b15520c25f74b0cc1dfc"} Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.666169 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.700661 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-7wwbf"] Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.734268 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-7wwbf"] Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.832056 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29c680ae-e158-4656-ab1b-449fd2ba8b0c-host\") pod \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.832199 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mz9r\" (UniqueName: \"kubernetes.io/projected/29c680ae-e158-4656-ab1b-449fd2ba8b0c-kube-api-access-8mz9r\") pod \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\" (UID: \"29c680ae-e158-4656-ab1b-449fd2ba8b0c\") " Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.832217 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29c680ae-e158-4656-ab1b-449fd2ba8b0c-host" (OuterVolumeSpecName: "host") pod "29c680ae-e158-4656-ab1b-449fd2ba8b0c" (UID: "29c680ae-e158-4656-ab1b-449fd2ba8b0c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.832746 4679 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29c680ae-e158-4656-ab1b-449fd2ba8b0c-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.838243 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c680ae-e158-4656-ab1b-449fd2ba8b0c-kube-api-access-8mz9r" (OuterVolumeSpecName: "kube-api-access-8mz9r") pod "29c680ae-e158-4656-ab1b-449fd2ba8b0c" (UID: "29c680ae-e158-4656-ab1b-449fd2ba8b0c"). InnerVolumeSpecName "kube-api-access-8mz9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:04:14 crc kubenswrapper[4679]: I0203 13:04:14.934689 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mz9r\" (UniqueName: \"kubernetes.io/projected/29c680ae-e158-4656-ab1b-449fd2ba8b0c-kube-api-access-8mz9r\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:15 crc kubenswrapper[4679]: I0203 13:04:15.579318 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e21a2f46d0083de91edeafb47e5fe73cd35d1c2fe5a5a67d2f1c1477f732db" Feb 03 13:04:15 crc kubenswrapper[4679]: I0203 13:04:15.579389 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-7wwbf" Feb 03 13:04:15 crc kubenswrapper[4679]: I0203 13:04:15.902169 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-vpmtp"] Feb 03 13:04:15 crc kubenswrapper[4679]: E0203 13:04:15.903238 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c680ae-e158-4656-ab1b-449fd2ba8b0c" containerName="container-00" Feb 03 13:04:15 crc kubenswrapper[4679]: I0203 13:04:15.903256 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c680ae-e158-4656-ab1b-449fd2ba8b0c" containerName="container-00" Feb 03 13:04:15 crc kubenswrapper[4679]: I0203 13:04:15.903539 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c680ae-e158-4656-ab1b-449fd2ba8b0c" containerName="container-00" Feb 03 13:04:15 crc kubenswrapper[4679]: I0203 13:04:15.904448 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.057080 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxgqp\" (UniqueName: \"kubernetes.io/projected/80979823-c162-43e4-af58-b0bcf4977c6e-kube-api-access-pxgqp\") pod \"crc-debug-vpmtp\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.057388 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80979823-c162-43e4-af58-b0bcf4977c6e-host\") pod \"crc-debug-vpmtp\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.159466 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxgqp\" (UniqueName: \"kubernetes.io/projected/80979823-c162-43e4-af58-b0bcf4977c6e-kube-api-access-pxgqp\") pod \"crc-debug-vpmtp\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.159576 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80979823-c162-43e4-af58-b0bcf4977c6e-host\") pod \"crc-debug-vpmtp\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.159694 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80979823-c162-43e4-af58-b0bcf4977c6e-host\") pod \"crc-debug-vpmtp\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.178285 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxgqp\" (UniqueName: \"kubernetes.io/projected/80979823-c162-43e4-af58-b0bcf4977c6e-kube-api-access-pxgqp\") pod \"crc-debug-vpmtp\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.224514 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c680ae-e158-4656-ab1b-449fd2ba8b0c" 
path="/var/lib/kubelet/pods/29c680ae-e158-4656-ab1b-449fd2ba8b0c/volumes" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.229265 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.592429 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" event={"ID":"80979823-c162-43e4-af58-b0bcf4977c6e","Type":"ContainerStarted","Data":"ca8d5ab369de4ad57a742649ff3d79730b8ef0d65dd5ff5f610a03191292368e"} Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.592721 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" event={"ID":"80979823-c162-43e4-af58-b0bcf4977c6e","Type":"ContainerStarted","Data":"8a8fe907cdc29e38405fb7fcadf3e1efb0177c2707482857f875f98cbac0fad7"} Feb 03 13:04:16 crc kubenswrapper[4679]: I0203 13:04:16.612796 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" podStartSLOduration=1.6127751830000001 podStartE2EDuration="1.612775183s" podCreationTimestamp="2026-02-03 13:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:16.609648913 +0000 UTC m=+3529.084545001" watchObservedRunningTime="2026-02-03 13:04:16.612775183 +0000 UTC m=+3529.087671271" Feb 03 13:04:17 crc kubenswrapper[4679]: I0203 13:04:17.603753 4679 generic.go:334] "Generic (PLEG): container finished" podID="80979823-c162-43e4-af58-b0bcf4977c6e" containerID="ca8d5ab369de4ad57a742649ff3d79730b8ef0d65dd5ff5f610a03191292368e" exitCode=0 Feb 03 13:04:17 crc kubenswrapper[4679]: I0203 13:04:17.603795 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" event={"ID":"80979823-c162-43e4-af58-b0bcf4977c6e","Type":"ContainerDied","Data":"ca8d5ab369de4ad57a742649ff3d79730b8ef0d65dd5ff5f610a03191292368e"} Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.714629 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.771162 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-vpmtp"] Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.782981 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-vpmtp"] Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.810487 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxgqp\" (UniqueName: \"kubernetes.io/projected/80979823-c162-43e4-af58-b0bcf4977c6e-kube-api-access-pxgqp\") pod \"80979823-c162-43e4-af58-b0bcf4977c6e\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.810557 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80979823-c162-43e4-af58-b0bcf4977c6e-host\") pod \"80979823-c162-43e4-af58-b0bcf4977c6e\" (UID: \"80979823-c162-43e4-af58-b0bcf4977c6e\") " Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.810742 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80979823-c162-43e4-af58-b0bcf4977c6e-host" (OuterVolumeSpecName: "host") pod "80979823-c162-43e4-af58-b0bcf4977c6e" (UID: "80979823-c162-43e4-af58-b0bcf4977c6e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.811131 4679 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80979823-c162-43e4-af58-b0bcf4977c6e-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.816706 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80979823-c162-43e4-af58-b0bcf4977c6e-kube-api-access-pxgqp" (OuterVolumeSpecName: "kube-api-access-pxgqp") pod "80979823-c162-43e4-af58-b0bcf4977c6e" (UID: "80979823-c162-43e4-af58-b0bcf4977c6e"). InnerVolumeSpecName "kube-api-access-pxgqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:04:18 crc kubenswrapper[4679]: I0203 13:04:18.912576 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxgqp\" (UniqueName: \"kubernetes.io/projected/80979823-c162-43e4-af58-b0bcf4977c6e-kube-api-access-pxgqp\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:19 crc kubenswrapper[4679]: I0203 13:04:19.623717 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8fe907cdc29e38405fb7fcadf3e1efb0177c2707482857f875f98cbac0fad7" Feb 03 13:04:19 crc kubenswrapper[4679]: I0203 13:04:19.623751 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-vpmtp" Feb 03 13:04:19 crc kubenswrapper[4679]: I0203 13:04:19.923402 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-mqc46"] Feb 03 13:04:19 crc kubenswrapper[4679]: E0203 13:04:19.923893 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80979823-c162-43e4-af58-b0bcf4977c6e" containerName="container-00" Feb 03 13:04:19 crc kubenswrapper[4679]: I0203 13:04:19.923912 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="80979823-c162-43e4-af58-b0bcf4977c6e" containerName="container-00" Feb 03 13:04:19 crc kubenswrapper[4679]: I0203 13:04:19.924159 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="80979823-c162-43e4-af58-b0bcf4977c6e" containerName="container-00" Feb 03 13:04:19 crc kubenswrapper[4679]: I0203 13:04:19.924954 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.035250 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b991fd5-7596-4171-a662-395e29b10cc1-host\") pod \"crc-debug-mqc46\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.035332 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgn9h\" (UniqueName: \"kubernetes.io/projected/0b991fd5-7596-4171-a662-395e29b10cc1-kube-api-access-cgn9h\") pod \"crc-debug-mqc46\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.137185 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgn9h\" (UniqueName: \"kubernetes.io/projected/0b991fd5-7596-4171-a662-395e29b10cc1-kube-api-access-cgn9h\") pod \"crc-debug-mqc46\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.137395 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b991fd5-7596-4171-a662-395e29b10cc1-host\") pod \"crc-debug-mqc46\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.137463 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b991fd5-7596-4171-a662-395e29b10cc1-host\") pod \"crc-debug-mqc46\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.156842 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgn9h\" (UniqueName: \"kubernetes.io/projected/0b991fd5-7596-4171-a662-395e29b10cc1-kube-api-access-cgn9h\") pod \"crc-debug-mqc46\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.224985 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80979823-c162-43e4-af58-b0bcf4977c6e" 
path="/var/lib/kubelet/pods/80979823-c162-43e4-af58-b0bcf4977c6e/volumes" Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.247215 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:20 crc kubenswrapper[4679]: W0203 13:04:20.282396 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b991fd5_7596_4171_a662_395e29b10cc1.slice/crio-8c7882fa5ad777f3010ea9f479b3438c4000e9652fc9af287ef4aca11e802243 WatchSource:0}: Error finding container 8c7882fa5ad777f3010ea9f479b3438c4000e9652fc9af287ef4aca11e802243: Status 404 returned error can't find the container with id 8c7882fa5ad777f3010ea9f479b3438c4000e9652fc9af287ef4aca11e802243 Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.633458 4679 generic.go:334] "Generic (PLEG): container finished" podID="0b991fd5-7596-4171-a662-395e29b10cc1" containerID="6599bdf02fa21959cc87263fea83a47d7aa0f546f8f5c515dc2e93d3f04c05c7" exitCode=0 Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.633554 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-mqc46" event={"ID":"0b991fd5-7596-4171-a662-395e29b10cc1","Type":"ContainerDied","Data":"6599bdf02fa21959cc87263fea83a47d7aa0f546f8f5c515dc2e93d3f04c05c7"} Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.633913 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/crc-debug-mqc46" event={"ID":"0b991fd5-7596-4171-a662-395e29b10cc1","Type":"ContainerStarted","Data":"8c7882fa5ad777f3010ea9f479b3438c4000e9652fc9af287ef4aca11e802243"} Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.678494 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-mqc46"] Feb 03 13:04:20 crc kubenswrapper[4679]: I0203 13:04:20.687262 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwpt/crc-debug-mqc46"] Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.735106 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.870796 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgn9h\" (UniqueName: \"kubernetes.io/projected/0b991fd5-7596-4171-a662-395e29b10cc1-kube-api-access-cgn9h\") pod \"0b991fd5-7596-4171-a662-395e29b10cc1\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.871311 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b991fd5-7596-4171-a662-395e29b10cc1-host\") pod \"0b991fd5-7596-4171-a662-395e29b10cc1\" (UID: \"0b991fd5-7596-4171-a662-395e29b10cc1\") " Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.871385 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b991fd5-7596-4171-a662-395e29b10cc1-host" (OuterVolumeSpecName: "host") pod "0b991fd5-7596-4171-a662-395e29b10cc1" (UID: "0b991fd5-7596-4171-a662-395e29b10cc1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.871853 4679 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b991fd5-7596-4171-a662-395e29b10cc1-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.876676 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b991fd5-7596-4171-a662-395e29b10cc1-kube-api-access-cgn9h" (OuterVolumeSpecName: "kube-api-access-cgn9h") pod "0b991fd5-7596-4171-a662-395e29b10cc1" (UID: "0b991fd5-7596-4171-a662-395e29b10cc1"). InnerVolumeSpecName "kube-api-access-cgn9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:04:21 crc kubenswrapper[4679]: I0203 13:04:21.973772 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgn9h\" (UniqueName: \"kubernetes.io/projected/0b991fd5-7596-4171-a662-395e29b10cc1-kube-api-access-cgn9h\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:22 crc kubenswrapper[4679]: I0203 13:04:22.222637 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b991fd5-7596-4171-a662-395e29b10cc1" path="/var/lib/kubelet/pods/0b991fd5-7596-4171-a662-395e29b10cc1/volumes" Feb 03 13:04:22 crc kubenswrapper[4679]: I0203 13:04:22.650197 4679 scope.go:117] "RemoveContainer" containerID="6599bdf02fa21959cc87263fea83a47d7aa0f546f8f5c515dc2e93d3f04c05c7" Feb 03 13:04:22 crc kubenswrapper[4679]: I0203 13:04:22.650232 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/crc-debug-mqc46" Feb 03 13:04:35 crc kubenswrapper[4679]: I0203 13:04:35.924633 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c4bcd4b-hkzh6_2aa77b26-ca52-4ef9-a1c2-68237a080e1b/barbican-api/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.213770 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c4bcd4b-hkzh6_2aa77b26-ca52-4ef9-a1c2-68237a080e1b/barbican-api-log/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.339864 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7d4655f6d4-rdwtj_91c5b9c5-d4c7-4138-90de-ee51de9f7a5f/barbican-keystone-listener/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.343908 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7d4655f6d4-rdwtj_91c5b9c5-d4c7-4138-90de-ee51de9f7a5f/barbican-keystone-listener-log/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.538421 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-789dd74f99-dtwb4_0786ef5c-404a-4c24-8188-d757082c1419/barbican-worker-log/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.579657 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-789dd74f99-dtwb4_0786ef5c-404a-4c24-8188-d757082c1419/barbican-worker/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.735538 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.735856 4679 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.767628 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w_83eaca34-8d94-48a8-8e56-58db37e376ab/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.829069 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/ceilometer-central-agent/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.906715 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/ceilometer-notification-agent/0.log" Feb 03 13:04:36 crc kubenswrapper[4679]: I0203 13:04:36.957798 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/proxy-httpd/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.035459 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/sg-core/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.140431 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbe4378f-83bf-420b-b73a-185c57ab9771/cinder-api/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.183627 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbe4378f-83bf-420b-b73a-185c57ab9771/cinder-api-log/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.361533 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4882a26d-4240-46b5-917c-dc6842916963/cinder-scheduler/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.407097 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4882a26d-4240-46b5-917c-dc6842916963/probe/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.514677 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xmthc_21b7ed40-7c17-44bd-9ad8-f47f21ea4e84/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.618962 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4_4d6011df-83a4-4d86-ac66-61b00cd615d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.739270 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-bgkzg_d3a4bf4d-7cf5-4026-acdf-53345ca82af1/init/0.log" Feb 03 13:04:37 crc kubenswrapper[4679]: I0203 13:04:37.992136 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9_aa12b60d-98f3-42a6-b429-cd451b1ec5fc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.003439 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-bgkzg_d3a4bf4d-7cf5-4026-acdf-53345ca82af1/init/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.017885 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-bgkzg_d3a4bf4d-7cf5-4026-acdf-53345ca82af1/dnsmasq-dns/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.204971 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b1d9c6da-29c6-43e7-92a6-ee0c5901c36b/glance-log/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.222885 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b1d9c6da-29c6-43e7-92a6-ee0c5901c36b/glance-httpd/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.417035 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1672261a-caab-4c72-9be3-78b40978e2cf/glance-log/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.426985 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1672261a-caab-4c72-9be3-78b40978e2cf/glance-httpd/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.582172 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74557bdb5d-lsfq8_a09ad5f1-6af1-452d-a08f-271579ecb3d1/horizon/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.825966 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh_bf811a1c-76b7-4b43-b658-d68388d38cb8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.956048 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-whhrf_2399747b-7fec-4916-8a58-13a53de36d78/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:38 crc kubenswrapper[4679]: I0203 13:04:38.973576 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74557bdb5d-lsfq8_a09ad5f1-6af1-452d-a08f-271579ecb3d1/horizon-log/0.log" Feb 03 13:04:39 crc kubenswrapper[4679]: I0203 13:04:39.282612 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29502061-wb5cr_23cead04-2ba2-47aa-8b2c-fe29c2a25fb3/keystone-cron/0.log" Feb 03 13:04:39 crc kubenswrapper[4679]: I0203 13:04:39.294242 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b6496f477-9vvrm_0d40e305-3fdf-4ce8-a586-7f2b9786e0eb/keystone-api/0.log" Feb 03 13:04:39 crc kubenswrapper[4679]: I0203 13:04:39.428853 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cba05e44-77a6-4a44-84c6-8bb482680662/kube-state-metrics/0.log" Feb 03 13:04:39 crc kubenswrapper[4679]: I0203 13:04:39.546238 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5_67eba320-30c8-4f6e-9958-f58ee00e9bdc/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:40 crc kubenswrapper[4679]: I0203 13:04:40.215299 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b98ff4cb5-lk4d9_fd629794-5ce3-4d07-9f6c-c0a85424379f/neutron-api/0.log" Feb 03 13:04:40 crc kubenswrapper[4679]: I0203 13:04:40.240084 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-7b98ff4cb5-lk4d9_fd629794-5ce3-4d07-9f6c-c0a85424379f/neutron-httpd/0.log" Feb 03 13:04:40 crc kubenswrapper[4679]: I0203 13:04:40.328312 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2_cc385278-b837-4f12-bd6f-5fdd89b02bd7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:40 crc kubenswrapper[4679]: I0203 13:04:40.903001 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e72b3e9b-a5ec-43f1-a286-43f2ce2f5240/nova-cell0-conductor-conductor/0.log" Feb 03 13:04:40 crc kubenswrapper[4679]: I0203 13:04:40.963271 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0144e14a-b09d-4182-8008-358b3032b05c/nova-api-log/0.log" Feb 03 13:04:41 crc kubenswrapper[4679]: I0203 13:04:41.089767 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0144e14a-b09d-4182-8008-358b3032b05c/nova-api-api/0.log" Feb 03 13:04:41 crc kubenswrapper[4679]: I0203 13:04:41.207751 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9/nova-cell1-conductor-conductor/0.log" Feb 03 13:04:41 crc kubenswrapper[4679]: I0203 13:04:41.245628 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bafb8aaf-7819-4978-aaae-7d26a4a126b6/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 13:04:41 crc kubenswrapper[4679]: I0203 13:04:41.508313 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-82h87_b00ee047-2435-41ba-b376-be13d8309d1f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:41 crc kubenswrapper[4679]: I0203 13:04:41.551151 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_43e2a214-af77-4834-9af8-6435c0cc24ba/nova-metadata-log/0.log" Feb 03 13:04:42 crc kubenswrapper[4679]: I0203 13:04:42.089979 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f3933651-b0cd-48e8-bcf4-b6ec20930d3b/nova-scheduler-scheduler/0.log" Feb 03 13:04:42 crc kubenswrapper[4679]: I0203 13:04:42.342413 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b788c2a3-0e8f-4a4a-b121-f4c021b4932c/mysql-bootstrap/0.log" Feb 03 13:04:42 crc kubenswrapper[4679]: I0203 13:04:42.552954 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b788c2a3-0e8f-4a4a-b121-f4c021b4932c/galera/0.log" Feb 03 13:04:42 crc kubenswrapper[4679]: I0203 13:04:42.593828 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b788c2a3-0e8f-4a4a-b121-f4c021b4932c/mysql-bootstrap/0.log" Feb 03 13:04:42 crc kubenswrapper[4679]: I0203 13:04:42.775289 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8a14eb9-fdf3-44dc-b8a8-0494fd209dea/mysql-bootstrap/0.log" Feb 03 13:04:42 crc kubenswrapper[4679]: I0203 13:04:42.830150 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_43e2a214-af77-4834-9af8-6435c0cc24ba/nova-metadata-metadata/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.043104 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8a14eb9-fdf3-44dc-b8a8-0494fd209dea/mysql-bootstrap/0.log" Feb 03 
13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.066579 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9015531f-675f-40b4-a643-94a33a87592b/openstackclient/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.070931 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8a14eb9-fdf3-44dc-b8a8-0494fd209dea/galera/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.253445 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5tt4c_c908c598-a229-467c-8430-de77205f95ec/ovn-controller/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.330090 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kmmd2_54fecd77-d186-4510-9e06-4ff67edee154/openstack-network-exporter/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.529974 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovsdb-server-init/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.688089 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovsdb-server-init/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.700867 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovs-vswitchd/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.784958 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovsdb-server/0.log" Feb 03 13:04:43 crc kubenswrapper[4679]: I0203 13:04:43.959203 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mfmjt_cc05f31e-be8f-497a-ba7b-1f5c54d070c4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.016765 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4275cf53-917f-4b88-9832-b3f9da33b445/openstack-network-exporter/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.287104 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4275cf53-917f-4b88-9832-b3f9da33b445/ovn-northd/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.502274 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bf6e3dac-ec8b-422b-9459-3554f884594d/openstack-network-exporter/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.557397 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bf6e3dac-ec8b-422b-9459-3554f884594d/ovsdbserver-nb/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.771893 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea35f3b6-94df-45c5-9b94-af55636b7ad0/openstack-network-exporter/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.811341 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea35f3b6-94df-45c5-9b94-af55636b7ad0/ovsdbserver-sb/0.log" Feb 03 13:04:44 crc kubenswrapper[4679]: I0203 13:04:44.869091 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79464686c6-vwq7l_b242f52b-0a15-4493-9da2-15aca091df48/placement-api/0.log" Feb 03 
13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.085057 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79464686c6-vwq7l_b242f52b-0a15-4493-9da2-15aca091df48/placement-log/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.254477 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_666e9640-9258-44a6-980d-e79d1dc7f2b3/setup-container/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.463417 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_666e9640-9258-44a6-980d-e79d1dc7f2b3/setup-container/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.469440 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_666e9640-9258-44a6-980d-e79d1dc7f2b3/rabbitmq/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.571921 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_891b9bf5-a68a-4118-a002-3b74879fac0b/setup-container/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.816425 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_891b9bf5-a68a-4118-a002-3b74879fac0b/setup-container/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.849399 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs_e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:45 crc kubenswrapper[4679]: I0203 13:04:45.880291 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_891b9bf5-a68a-4118-a002-3b74879fac0b/rabbitmq/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.078479 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5bzht_b709c6fe-9a41-44fd-9350-989aa43947da/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.159819 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s_6fc50826-5b8c-4973-bef7-78e861d37c96/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.306303 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lkc9z_e11ad6fe-5a94-4797-a827-ca1918e67f79/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.477526 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zkxjk_cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b/ssh-known-hosts-edpm-deployment/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.656326 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56b6b9b667-hn9mj_7034878f-0540-438b-b9b3-5e726c04e49c/proxy-server/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.747258 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56b6b9b667-hn9mj_7034878f-0540-438b-b9b3-5e726c04e49c/proxy-httpd/0.log" Feb 03 13:04:46 crc kubenswrapper[4679]: I0203 13:04:46.851008 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7mtlb_43821977-e5d9-4405-b6c6-d739a8fea389/swift-ring-rebalance/0.log" Feb 03 
13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.065885 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-auditor/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.089746 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-reaper/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.179700 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-replicator/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.439070 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-server/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.474686 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-auditor/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.522842 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-replicator/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.525093 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-server/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.659216 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-updater/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.765804 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-auditor/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.771454 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-replicator/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.805776 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-expirer/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.970178 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-server/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.987109 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-updater/0.log" Feb 03 13:04:47 crc kubenswrapper[4679]: I0203 13:04:47.990871 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/rsync/0.log" Feb 03 13:04:48 crc kubenswrapper[4679]: I0203 13:04:48.011100 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/swift-recon-cron/0.log" Feb 03 13:04:48 crc kubenswrapper[4679]: I0203 13:04:48.271181 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a210a822-5111-45b8-9068-e745a7471962/tempest-tests-tempest-tests-runner/0.log" Feb 03 13:04:48 crc kubenswrapper[4679]: I0203 13:04:48.294327 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq_fbcf4978-33e8-4444-b972-dd9859e52ec0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:48 crc kubenswrapper[4679]: I0203 13:04:48.579514 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_322a6172-ec37-452f-b49c-15af3f777c8a/test-operator-logs-container/0.log" Feb 03 13:04:48 crc kubenswrapper[4679]: I0203 13:04:48.616537 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz_3b077848-9e84-4914-83b8-d47ebe659982/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:04:54 crc kubenswrapper[4679]: I0203 13:04:54.864133 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9e899cda-42d0-40ae-a9c6-34f4bbad9fe7/memcached/0.log" Feb 03 13:05:06 crc kubenswrapper[4679]: I0203 13:05:06.735999 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:05:06 crc kubenswrapper[4679]: I0203 13:05:06.736603 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:05:15 crc kubenswrapper[4679]: I0203 13:05:15.645452 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/util/0.log" Feb 03 13:05:15 crc kubenswrapper[4679]: I0203 13:05:15.868120 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/pull/0.log" Feb 03 13:05:15 crc kubenswrapper[4679]: I0203 13:05:15.917765 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/util/0.log" Feb 03 13:05:15 crc kubenswrapper[4679]: I0203 13:05:15.918204 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/pull/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.104143 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/pull/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.138031 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/extract/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.151819 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/util/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 
13:05:16.421407 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-xhb56_d39a188d-08b7-4670-a5da-c65da1b30936/manager/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.424190 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-6l9l6_3722274c-5a6f-49ef-89ac-06fc5afd3098/manager/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.612185 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-x9kws_d96d5316-a678-427e-aa6f-a606876142d3/manager/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.726938 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-vgbcs_e92384fd-2d3b-4ba9-b265-92dbc9941750/manager/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.837703 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-k5gcz_9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4/manager/0.log" Feb 03 13:05:16 crc kubenswrapper[4679]: I0203 13:05:16.893432 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7p976_ee886e3f-df4d-43e4-b1ad-8eec77ead216/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.125100 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-pwgd6_36b08aa8-071f-4862-821c-9ee85afcdf8e/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.282017 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vgg4d_2de6e912-5456-4209-85d7-2bddcedc0384/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.393440 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-h77pz_b498e6cd-6f07-461f-bf7a-5842461cbbbe/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.471906 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-m6jbm_ee3e0d19-7d26-4e63-8859-f1a2596a0ba5/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.597189 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-nvx58_a0fa5212-9380-4d21-a8ae-a400eb674de3/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.769313 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-8gc44_6d552366-fc97-4365-8abd-5b32b28a09b2/manager/0.log" Feb 03 13:05:17 crc kubenswrapper[4679]: I0203 13:05:17.892761 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bllmz_79b06c14-7e75-4306-8001-3217809de327/manager/0.log" Feb 03 13:05:18 crc kubenswrapper[4679]: I0203 13:05:18.022331 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-4pvxk_e25213d7-4c75-46b8-b39b-44e75557c434/manager/0.log" Feb 03 13:05:18 crc kubenswrapper[4679]: I0203 13:05:18.113441 4679 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x_11b2dd9f-a9fc-427c-a2a2-744484f359b4/manager/0.log" Feb 03 13:05:18 crc kubenswrapper[4679]: I0203 13:05:18.530430 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248q5_cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a/registry-server/0.log" Feb 03 13:05:18 crc kubenswrapper[4679]: I0203 13:05:18.533230 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68c5f5659f-77cqz_52069189-49bf-46cc-b13d-b7705a4e68f1/operator/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.036862 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-42stf_1f76a687-e27f-4d78-aeea-c2faca503549/manager/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.216731 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-46lw2_6b1f821d-79a5-4fe4-bc8a-f850716781e7/manager/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.299762 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ddt7p_ebf666dd-6b96-4907-8024-800d9634590f/operator/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.534418 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-vxkt7_35892343-44c5-4cfb-9061-0b0542d23b99/manager/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.564236 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-599dbc9849-9t5wf_632ab40b-9540-48ad-b1c7-7b5b1603e4d2/manager/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.710793 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-wktnn_8e3f82d2-bf0a-4203-80af-3b48711ad1f0/manager/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.774243 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-89mxg_dedc1caa-ae76-49df-818b-49e570c09a31/manager/0.log" Feb 03 13:05:19 crc kubenswrapper[4679]: I0203 13:05:19.925949 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-w6xc9_3f6911aa-e91a-4ab6-b2cd-0c1a08977a57/manager/0.log" Feb 03 13:05:36 crc kubenswrapper[4679]: I0203 13:05:36.735302 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:05:36 crc kubenswrapper[4679]: I0203 13:05:36.735952 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:05:36 crc kubenswrapper[4679]: I0203 13:05:36.736009 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 13:05:36 crc kubenswrapper[4679]: I0203 13:05:36.736999 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ab0f426c2e0d818889c27b1eb5af1a16d949987dadc37525fac0b2b65e00635"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:05:36 crc kubenswrapper[4679]: I0203 13:05:36.737068 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://4ab0f426c2e0d818889c27b1eb5af1a16d949987dadc37525fac0b2b65e00635" gracePeriod=600 Feb 03 13:05:37 crc kubenswrapper[4679]: I0203 13:05:37.349427 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="4ab0f426c2e0d818889c27b1eb5af1a16d949987dadc37525fac0b2b65e00635" exitCode=0 Feb 03 13:05:37 crc kubenswrapper[4679]: I0203 13:05:37.349484 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"4ab0f426c2e0d818889c27b1eb5af1a16d949987dadc37525fac0b2b65e00635"} Feb 03 13:05:37 crc kubenswrapper[4679]: I0203 13:05:37.349796 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d"} Feb 03 13:05:37 crc kubenswrapper[4679]: I0203 13:05:37.349821 4679 scope.go:117] "RemoveContainer" containerID="0397718cef45f22b1d6debf94d1787759275c3f1b486d62ec696b5562fea4a91" Feb 03 13:05:39 crc kubenswrapper[4679]: I0203 13:05:39.390647 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mlccb_00b9ca4d-dce2-4baa-b9ce-0eda632507e7/control-plane-machine-set-operator/0.log" Feb 03 13:05:39 crc kubenswrapper[4679]: I0203 13:05:39.535490 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cshmm_75e7133e-70dc-4896-bac7-d159e39737c1/kube-rbac-proxy/0.log" Feb 03 13:05:39 crc kubenswrapper[4679]: I0203 13:05:39.567496 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cshmm_75e7133e-70dc-4896-bac7-d159e39737c1/machine-api-operator/0.log" Feb 03 13:05:51 crc kubenswrapper[4679]: I0203 13:05:51.848932 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-j5c2m_dc20b5f8-7353-4785-ac36-1f263f60b102/cert-manager-controller/0.log" Feb 03 13:05:52 crc kubenswrapper[4679]: I0203 13:05:52.034759 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-m6rvf_1934112b-b7de-4e8a-a94c-696e9a9412cd/cert-manager-cainjector/0.log" Feb 03 13:05:52 crc kubenswrapper[4679]: I0203 13:05:52.118001 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xxlm9_bcb6f977-4961-473e-afe5-be2b055270e6/cert-manager-webhook/0.log" Feb 03 13:06:04 crc 
kubenswrapper[4679]: I0203 13:06:04.335325 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wgq45_ee8ef129-bd8a-4296-9ac9-8bad21434ec6/nmstate-console-plugin/0.log" Feb 03 13:06:04 crc kubenswrapper[4679]: I0203 13:06:04.543096 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-s6vrk_bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4/nmstate-handler/0.log" Feb 03 13:06:04 crc kubenswrapper[4679]: I0203 13:06:04.564425 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-n5w4n_5d978da5-5322-40f9-a7ea-c7dd2295874f/kube-rbac-proxy/0.log" Feb 03 13:06:04 crc kubenswrapper[4679]: I0203 13:06:04.661208 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-n5w4n_5d978da5-5322-40f9-a7ea-c7dd2295874f/nmstate-metrics/0.log" Feb 03 13:06:04 crc kubenswrapper[4679]: I0203 13:06:04.748212 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l95cn_ad475107-250b-403a-8563-b90f107e4f89/nmstate-operator/0.log" Feb 03 13:06:04 crc kubenswrapper[4679]: I0203 13:06:04.861429 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5fck4_4a9f66b5-a4ee-40b5-95cf-159557632d17/nmstate-webhook/0.log" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.517400 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bmp67"] Feb 03 13:06:16 crc kubenswrapper[4679]: E0203 13:06:16.519528 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b991fd5-7596-4171-a662-395e29b10cc1" containerName="container-00" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.520226 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b991fd5-7596-4171-a662-395e29b10cc1" containerName="container-00" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.520599 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b991fd5-7596-4171-a662-395e29b10cc1" containerName="container-00" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.522500 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.567955 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmp67"] Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.588785 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-catalog-content\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.588904 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-utilities\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.589014 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66kxb\" (UniqueName: \"kubernetes.io/projected/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-kube-api-access-66kxb\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.690487 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66kxb\" (UniqueName: \"kubernetes.io/projected/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-kube-api-access-66kxb\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.690726 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-catalog-content\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.690826 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-utilities\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.691294 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-catalog-content\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.691337 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-utilities\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.710745 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-66kxb\" (UniqueName: \"kubernetes.io/projected/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-kube-api-access-66kxb\") pod \"community-operators-bmp67\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:16 crc kubenswrapper[4679]: I0203 13:06:16.909608 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:17 crc kubenswrapper[4679]: I0203 13:06:17.485190 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmp67"] Feb 03 13:06:17 crc kubenswrapper[4679]: I0203 13:06:17.737974 4679 generic.go:334] "Generic (PLEG): container finished" podID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerID="db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0" exitCode=0 Feb 03 13:06:17 crc kubenswrapper[4679]: I0203 13:06:17.739905 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmp67" event={"ID":"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374","Type":"ContainerDied","Data":"db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0"} Feb 03 13:06:17 crc kubenswrapper[4679]: I0203 13:06:17.739971 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmp67" event={"ID":"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374","Type":"ContainerStarted","Data":"1c1c9ddc90994f919d2b7343dd2d58b2958ce5c09d224946547a7c6cbfac7988"} Feb 03 13:06:18 crc kubenswrapper[4679]: I0203 13:06:18.751315 4679 generic.go:334] "Generic (PLEG): container finished" podID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerID="781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2" exitCode=0 Feb 03 13:06:18 crc kubenswrapper[4679]: I0203 13:06:18.751624 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmp67" event={"ID":"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374","Type":"ContainerDied","Data":"781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2"} Feb 03 13:06:20 crc kubenswrapper[4679]: I0203 13:06:20.781005 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmp67" event={"ID":"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374","Type":"ContainerStarted","Data":"1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc"} Feb 03 13:06:20 crc kubenswrapper[4679]: I0203 13:06:20.832767 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bmp67" podStartSLOduration=2.670664607 podStartE2EDuration="4.832740909s" podCreationTimestamp="2026-02-03 13:06:16 +0000 UTC" firstStartedPulling="2026-02-03 13:06:17.740036257 +0000 UTC m=+3650.214932345" lastFinishedPulling="2026-02-03 13:06:19.902112539 +0000 UTC m=+3652.377008647" observedRunningTime="2026-02-03 13:06:20.824844167 +0000 UTC m=+3653.299740265" watchObservedRunningTime="2026-02-03 13:06:20.832740909 +0000 UTC m=+3653.307636997" Feb 03 13:06:26 crc kubenswrapper[4679]: I0203 13:06:26.910249 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:26 crc kubenswrapper[4679]: I0203 13:06:26.910912 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:26 crc kubenswrapper[4679]: I0203 13:06:26.967253 4679 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:27 crc kubenswrapper[4679]: I0203 13:06:27.878719 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:27 crc kubenswrapper[4679]: I0203 13:06:27.925809 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bmp67"] Feb 03 13:06:29 crc kubenswrapper[4679]: I0203 13:06:29.852466 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bmp67" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="registry-server" containerID="cri-o://1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc" gracePeriod=2 Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.355490 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.476165 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66kxb\" (UniqueName: \"kubernetes.io/projected/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-kube-api-access-66kxb\") pod \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.476397 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-catalog-content\") pod \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.476433 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-utilities\") pod \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\" (UID: \"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374\") " Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.477196 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-utilities" (OuterVolumeSpecName: "utilities") pod "1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" (UID: "1f517cd6-0d85-4fe3-a26d-e8bd4ec72374"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.487695 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-kube-api-access-66kxb" (OuterVolumeSpecName: "kube-api-access-66kxb") pod "1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" (UID: "1f517cd6-0d85-4fe3-a26d-e8bd4ec72374"). InnerVolumeSpecName "kube-api-access-66kxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.539927 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" (UID: "1f517cd6-0d85-4fe3-a26d-e8bd4ec72374"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.579585 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66kxb\" (UniqueName: \"kubernetes.io/projected/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-kube-api-access-66kxb\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.579630 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.579641 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.863708 4679 generic.go:334] "Generic (PLEG): container finished" podID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerID="1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc" exitCode=0 Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.863774 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmp67" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.863812 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmp67" event={"ID":"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374","Type":"ContainerDied","Data":"1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc"} Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.864161 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmp67" event={"ID":"1f517cd6-0d85-4fe3-a26d-e8bd4ec72374","Type":"ContainerDied","Data":"1c1c9ddc90994f919d2b7343dd2d58b2958ce5c09d224946547a7c6cbfac7988"} Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.864187 4679 scope.go:117] "RemoveContainer" containerID="1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.893391 4679 scope.go:117] "RemoveContainer" containerID="781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.910700 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bmp67"] Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.921204 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bmp67"] Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.929548 4679 scope.go:117] "RemoveContainer" containerID="db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.965752 4679 scope.go:117] "RemoveContainer" containerID="1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc" Feb 03 13:06:30 crc kubenswrapper[4679]: E0203 13:06:30.966350 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc\": container with ID starting with 1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc not found: ID does not exist" containerID="1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.966462 
4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc"} err="failed to get container status \"1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc\": rpc error: code = NotFound desc = could not find container \"1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc\": container with ID starting with 1867cd082a9eab50608d1c9ceba1627912f5e41c02438fca4b5e4759611d2fbc not found: ID does not exist" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.966498 4679 scope.go:117] "RemoveContainer" containerID="781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2" Feb 03 13:06:30 crc kubenswrapper[4679]: E0203 13:06:30.967055 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2\": container with ID starting with 781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2 not found: ID does not exist" containerID="781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.967094 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2"} err="failed to get container status \"781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2\": rpc error: code = NotFound desc = could not find container \"781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2\": container with ID starting with 781005c35cc3d06e46d11914fd7fdde05374e976437546bae91238626f0954b2 not found: ID does not exist" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.967126 4679 scope.go:117] "RemoveContainer" containerID="db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0" Feb 03 13:06:30 crc kubenswrapper[4679]: E0203 13:06:30.967503 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0\": container with ID starting with db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0 not found: ID does not exist" containerID="db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0" Feb 03 13:06:30 crc kubenswrapper[4679]: I0203 13:06:30.967561 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0"} err="failed to get container status \"db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0\": rpc error: code = NotFound desc = could not find container \"db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0\": container with ID starting with db8688863961fbb7e6704677ce607be3b7c9e234060b6ade4cd2aa30b1588ed0 not found: ID does not exist" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.405607 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sd924_8f94f678-3ab0-4078-b6ad-361e9326083c/kube-rbac-proxy/0.log" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.485230 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sd924_8f94f678-3ab0-4078-b6ad-361e9326083c/controller/0.log" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.589448 4679 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.824511 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.840558 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.848860 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:06:31 crc kubenswrapper[4679]: I0203 13:06:31.882614 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.075487 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.081437 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.100279 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.106587 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.222319 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" path="/var/lib/kubelet/pods/1f517cd6-0d85-4fe3-a26d-e8bd4ec72374/volumes" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.270385 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.295267 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.303900 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.330154 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/controller/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.511953 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/frr-metrics/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.513756 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/kube-rbac-proxy/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.537207 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/kube-rbac-proxy-frr/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.694920 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/reloader/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.782165 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-w9d6c_97eb643e-6db5-4612-acbf-eef52bbd1cba/frr-k8s-webhook-server/0.log" Feb 03 13:06:32 crc kubenswrapper[4679]: I0203 13:06:32.971390 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-696d65d798-4rvqz_2b8aafdc-129f-420c-a901-fa59576bf426/manager/0.log" Feb 03 13:06:33 crc kubenswrapper[4679]: I0203 13:06:33.185669 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-72kj8_7c2c7dcb-cc91-4794-baaf-f766c8e7cd55/kube-rbac-proxy/0.log" Feb 03 13:06:33 crc kubenswrapper[4679]: I0203 13:06:33.216291 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b48565759-btpsb_8e1b318f-e557-49ba-91c9-3489ccb19246/webhook-server/0.log" Feb 03 13:06:33 crc kubenswrapper[4679]: I0203 13:06:33.958090 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-72kj8_7c2c7dcb-cc91-4794-baaf-f766c8e7cd55/speaker/0.log" Feb 03 13:06:34 crc kubenswrapper[4679]: I0203 13:06:34.007552 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/frr/0.log" Feb 03 13:06:46 crc kubenswrapper[4679]: I0203 13:06:46.563668 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/util/0.log" Feb 03 13:06:46 crc kubenswrapper[4679]: I0203 13:06:46.748156 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/pull/0.log" Feb 03 13:06:46 crc kubenswrapper[4679]: I0203 13:06:46.757950 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/util/0.log" Feb 03 13:06:46 crc kubenswrapper[4679]: I0203 13:06:46.796734 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/pull/0.log" Feb 03 13:06:46 crc kubenswrapper[4679]: I0203 13:06:46.949963 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/pull/0.log" Feb 03 13:06:46 crc kubenswrapper[4679]: I0203 13:06:46.973964 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/util/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.030701 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/extract/0.log" Feb 03 13:06:47 crc 
kubenswrapper[4679]: I0203 13:06:47.129482 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/util/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.309298 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/util/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.317764 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/pull/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.317934 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/pull/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.548728 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/extract/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.557098 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/pull/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.557402 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/util/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.777911 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-utilities/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.885742 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-utilities/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.951014 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-content/0.log" Feb 03 13:06:47 crc kubenswrapper[4679]: I0203 13:06:47.953395 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-content/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.119322 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-utilities/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.151211 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-content/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.324022 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-utilities/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.555385 4679 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-content/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.584900 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-content/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.644711 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-utilities/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.658479 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/registry-server/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.784897 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-utilities/0.log" Feb 03 13:06:48 crc kubenswrapper[4679]: I0203 13:06:48.832078 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-content/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.064158 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sbtmr_6d1001e8-7956-4d94-aed4-c482940134f4/marketplace-operator/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.127231 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-utilities/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.342496 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-content/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.376248 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-content/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.436251 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-utilities/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.520935 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/registry-server/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.654837 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-utilities/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.685095 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-content/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.839009 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/registry-server/0.log" Feb 03 13:06:49 crc kubenswrapper[4679]: I0203 13:06:49.880420 4679 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/extract-utilities/0.log" Feb 03 13:06:50 crc kubenswrapper[4679]: I0203 13:06:50.073742 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/extract-content/0.log" Feb 03 13:06:50 crc kubenswrapper[4679]: I0203 13:06:50.090966 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/extract-content/0.log" Feb 03 13:06:50 crc kubenswrapper[4679]: I0203 13:06:50.139725 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/extract-utilities/0.log" Feb 03 13:06:50 crc kubenswrapper[4679]: I0203 13:06:50.322239 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/extract-content/0.log" Feb 03 13:06:50 crc kubenswrapper[4679]: I0203 13:06:50.368590 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/extract-utilities/0.log" Feb 03 13:06:50 crc kubenswrapper[4679]: I0203 13:06:50.910890 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rs7cm_330ffce1-de6e-4402-8bb5-52976082c21e/registry-server/0.log" Feb 03 13:07:06 crc kubenswrapper[4679]: E0203 13:07:06.829811 4679 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.18:33924->38.129.56.18:46839: read tcp 38.129.56.18:33924->38.129.56.18:46839: read: connection reset by peer Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.652739 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-27lns"] Feb 03 13:08:03 crc kubenswrapper[4679]: E0203 13:08:03.653573 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="registry-server" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.653605 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="registry-server" Feb 03 13:08:03 crc kubenswrapper[4679]: E0203 13:08:03.653623 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="extract-utilities" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.653631 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="extract-utilities" Feb 03 13:08:03 crc kubenswrapper[4679]: E0203 13:08:03.653710 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="extract-content" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.653716 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="extract-content" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.653950 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f517cd6-0d85-4fe3-a26d-e8bd4ec72374" containerName="registry-server" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.669571 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.675650 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27lns"] Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.813637 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8stb\" (UniqueName: \"kubernetes.io/projected/68a26ec6-63c5-47cb-9613-03824f4441ff-kube-api-access-g8stb\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.813750 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-utilities\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.813793 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-catalog-content\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.915944 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8stb\" (UniqueName: \"kubernetes.io/projected/68a26ec6-63c5-47cb-9613-03824f4441ff-kube-api-access-g8stb\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.916072 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-utilities\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.916110 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-catalog-content\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.916488 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-utilities\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.916598 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-catalog-content\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.936298 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g8stb\" (UniqueName: \"kubernetes.io/projected/68a26ec6-63c5-47cb-9613-03824f4441ff-kube-api-access-g8stb\") pod \"certified-operators-27lns\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:03 crc kubenswrapper[4679]: I0203 13:08:03.996665 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:04 crc kubenswrapper[4679]: I0203 13:08:04.603538 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27lns"] Feb 03 13:08:04 crc kubenswrapper[4679]: I0203 13:08:04.783498 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27lns" event={"ID":"68a26ec6-63c5-47cb-9613-03824f4441ff","Type":"ContainerStarted","Data":"48b20d7c48a15a7b80d07c961d25e2aa0f49edb1ff7e1b209d92a961f575b67c"} Feb 03 13:08:05 crc kubenswrapper[4679]: I0203 13:08:05.795820 4679 generic.go:334] "Generic (PLEG): container finished" podID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerID="74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf" exitCode=0 Feb 03 13:08:05 crc kubenswrapper[4679]: I0203 13:08:05.795927 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27lns" event={"ID":"68a26ec6-63c5-47cb-9613-03824f4441ff","Type":"ContainerDied","Data":"74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf"} Feb 03 13:08:05 crc kubenswrapper[4679]: I0203 13:08:05.797890 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:08:06 crc kubenswrapper[4679]: I0203 13:08:06.735912 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:08:06 crc kubenswrapper[4679]: I0203 13:08:06.736279 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:08:07 crc kubenswrapper[4679]: I0203 13:08:07.820830 4679 generic.go:334] "Generic (PLEG): container finished" podID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerID="d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381" exitCode=0 Feb 03 13:08:07 crc kubenswrapper[4679]: I0203 13:08:07.820998 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27lns" event={"ID":"68a26ec6-63c5-47cb-9613-03824f4441ff","Type":"ContainerDied","Data":"d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381"} Feb 03 13:08:08 crc kubenswrapper[4679]: I0203 13:08:08.833682 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27lns" event={"ID":"68a26ec6-63c5-47cb-9613-03824f4441ff","Type":"ContainerStarted","Data":"71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22"} Feb 03 13:08:08 crc kubenswrapper[4679]: I0203 13:08:08.858670 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-27lns" podStartSLOduration=3.233973925 podStartE2EDuration="5.858650665s" podCreationTimestamp="2026-02-03 13:08:03 +0000 UTC" firstStartedPulling="2026-02-03 13:08:05.797656388 +0000 UTC m=+3758.272552476" lastFinishedPulling="2026-02-03 13:08:08.422333128 +0000 UTC m=+3760.897229216" observedRunningTime="2026-02-03 13:08:08.849213101 +0000 UTC m=+3761.324109179" watchObservedRunningTime="2026-02-03 13:08:08.858650665 +0000 UTC m=+3761.333546753" Feb 03 13:08:13 crc kubenswrapper[4679]: I0203 13:08:13.997115 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:13 crc kubenswrapper[4679]: I0203 13:08:13.997767 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:14 crc kubenswrapper[4679]: I0203 13:08:14.051661 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:14 crc kubenswrapper[4679]: I0203 13:08:14.936654 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:14 crc kubenswrapper[4679]: I0203 13:08:14.982348 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27lns"] Feb 03 13:08:16 crc kubenswrapper[4679]: I0203 13:08:16.914023 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-27lns" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="registry-server" containerID="cri-o://71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22" gracePeriod=2 Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.428404 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.494068 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8stb\" (UniqueName: \"kubernetes.io/projected/68a26ec6-63c5-47cb-9613-03824f4441ff-kube-api-access-g8stb\") pod \"68a26ec6-63c5-47cb-9613-03824f4441ff\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.494178 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-catalog-content\") pod \"68a26ec6-63c5-47cb-9613-03824f4441ff\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.494530 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-utilities\") pod \"68a26ec6-63c5-47cb-9613-03824f4441ff\" (UID: \"68a26ec6-63c5-47cb-9613-03824f4441ff\") " Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.495435 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-utilities" (OuterVolumeSpecName: "utilities") pod "68a26ec6-63c5-47cb-9613-03824f4441ff" (UID: "68a26ec6-63c5-47cb-9613-03824f4441ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.500401 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a26ec6-63c5-47cb-9613-03824f4441ff-kube-api-access-g8stb" (OuterVolumeSpecName: "kube-api-access-g8stb") pod "68a26ec6-63c5-47cb-9613-03824f4441ff" (UID: "68a26ec6-63c5-47cb-9613-03824f4441ff"). InnerVolumeSpecName "kube-api-access-g8stb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.545303 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68a26ec6-63c5-47cb-9613-03824f4441ff" (UID: "68a26ec6-63c5-47cb-9613-03824f4441ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.596792 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8stb\" (UniqueName: \"kubernetes.io/projected/68a26ec6-63c5-47cb-9613-03824f4441ff-kube-api-access-g8stb\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.596848 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.596863 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a26ec6-63c5-47cb-9613-03824f4441ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.927047 4679 generic.go:334] "Generic (PLEG): container finished" podID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerID="71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22" exitCode=0 Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.927136 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27lns" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.927142 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27lns" event={"ID":"68a26ec6-63c5-47cb-9613-03824f4441ff","Type":"ContainerDied","Data":"71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22"} Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.928556 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27lns" event={"ID":"68a26ec6-63c5-47cb-9613-03824f4441ff","Type":"ContainerDied","Data":"48b20d7c48a15a7b80d07c961d25e2aa0f49edb1ff7e1b209d92a961f575b67c"} Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.928576 4679 scope.go:117] "RemoveContainer" containerID="71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.973113 4679 scope.go:117] "RemoveContainer" containerID="d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381" Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.980204 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27lns"] Feb 03 13:08:17 crc kubenswrapper[4679]: I0203 13:08:17.991181 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-27lns"] Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.001894 4679 scope.go:117] "RemoveContainer" containerID="74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.050138 4679 scope.go:117] "RemoveContainer" containerID="71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22" Feb 03 13:08:18 crc kubenswrapper[4679]: E0203 13:08:18.050806 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22\": container with ID starting with 71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22 not found: ID does not exist" containerID="71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.050846 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22"} err="failed to get container status \"71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22\": rpc error: code = NotFound desc = could not find container \"71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22\": container with ID starting with 71bb0d9f534dde0a8e4902615d8b9b7d25f9fbe8b11d2848d8e4b5fd568a7b22 not found: ID does not exist" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.050872 4679 scope.go:117] "RemoveContainer" containerID="d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381" Feb 03 13:08:18 crc kubenswrapper[4679]: E0203 13:08:18.051222 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381\": container with ID starting with d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381 not found: ID does not exist" containerID="d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.051258 4679 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381"} err="failed to get container status \"d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381\": rpc error: code = NotFound desc = could not find container \"d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381\": container with ID starting with d9ccde9c9aa547d366cdeab6a1f88484b1ab6d56a0cd66ce0f0266a922b99381 not found: ID does not exist" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.051286 4679 scope.go:117] "RemoveContainer" containerID="74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf" Feb 03 13:08:18 crc kubenswrapper[4679]: E0203 13:08:18.051651 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf\": container with ID starting with 74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf not found: ID does not exist" containerID="74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.051701 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf"} err="failed to get container status \"74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf\": rpc error: code = NotFound desc = could not find container \"74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf\": container with ID starting with 74de561b9b6e693ce68ab442218de07638f662db4b593fa5744168d3db549cdf not found: ID does not exist" Feb 03 13:08:18 crc kubenswrapper[4679]: I0203 13:08:18.226062 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" path="/var/lib/kubelet/pods/68a26ec6-63c5-47cb-9613-03824f4441ff/volumes" Feb 03 13:08:36 crc kubenswrapper[4679]: I0203 13:08:36.736274 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:08:36 crc kubenswrapper[4679]: I0203 13:08:36.736880 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:08:39 crc kubenswrapper[4679]: I0203 13:08:39.139425 4679 generic.go:334] "Generic (PLEG): container finished" podID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerID="1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d" exitCode=0 Feb 03 13:08:39 crc kubenswrapper[4679]: I0203 13:08:39.139523 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" event={"ID":"e3c5058d-1324-4403-9bff-a9e204ecfe46","Type":"ContainerDied","Data":"1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d"} Feb 03 13:08:39 crc kubenswrapper[4679]: I0203 13:08:39.140310 4679 scope.go:117] "RemoveContainer" containerID="1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d" Feb 03 13:08:39 crc kubenswrapper[4679]: 
I0203 13:08:39.417497 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwpt_must-gather-ljl6m_e3c5058d-1324-4403-9bff-a9e204ecfe46/gather/0.log" Feb 03 13:08:47 crc kubenswrapper[4679]: I0203 13:08:47.670417 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wjwpt/must-gather-ljl6m"] Feb 03 13:08:47 crc kubenswrapper[4679]: I0203 13:08:47.671894 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="copy" containerID="cri-o://c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b" gracePeriod=2 Feb 03 13:08:47 crc kubenswrapper[4679]: I0203 13:08:47.680834 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wjwpt/must-gather-ljl6m"] Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.202208 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwpt_must-gather-ljl6m_e3c5058d-1324-4403-9bff-a9e204ecfe46/copy/0.log" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.203182 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.218728 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wjwpt_must-gather-ljl6m_e3c5058d-1324-4403-9bff-a9e204ecfe46/copy/0.log" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.219185 4679 generic.go:334] "Generic (PLEG): container finished" podID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerID="c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b" exitCode=143 Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.219296 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wjwpt/must-gather-ljl6m" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.228253 4679 scope.go:117] "RemoveContainer" containerID="c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.255830 4679 scope.go:117] "RemoveContainer" containerID="1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.273747 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsssp\" (UniqueName: \"kubernetes.io/projected/e3c5058d-1324-4403-9bff-a9e204ecfe46-kube-api-access-gsssp\") pod \"e3c5058d-1324-4403-9bff-a9e204ecfe46\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.273912 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3c5058d-1324-4403-9bff-a9e204ecfe46-must-gather-output\") pod \"e3c5058d-1324-4403-9bff-a9e204ecfe46\" (UID: \"e3c5058d-1324-4403-9bff-a9e204ecfe46\") " Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.284046 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c5058d-1324-4403-9bff-a9e204ecfe46-kube-api-access-gsssp" (OuterVolumeSpecName: "kube-api-access-gsssp") pod "e3c5058d-1324-4403-9bff-a9e204ecfe46" (UID: "e3c5058d-1324-4403-9bff-a9e204ecfe46"). InnerVolumeSpecName "kube-api-access-gsssp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.370612 4679 scope.go:117] "RemoveContainer" containerID="c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b" Feb 03 13:08:48 crc kubenswrapper[4679]: E0203 13:08:48.371152 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b\": container with ID starting with c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b not found: ID does not exist" containerID="c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.371191 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b"} err="failed to get container status \"c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b\": rpc error: code = NotFound desc = could not find container \"c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b\": container with ID starting with c87c20451b73633085845d41bf7e56b40b19af8b2107709da5069833bc79ee0b not found: ID does not exist" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.371224 4679 scope.go:117] "RemoveContainer" containerID="1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d" Feb 03 13:08:48 crc kubenswrapper[4679]: E0203 13:08:48.371604 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d\": container with ID starting with 1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d not found: ID does not exist" containerID="1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.371628 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d"} err="failed to get container status \"1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d\": rpc error: code = NotFound desc = could not find container \"1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d\": container with ID starting with 1385e5c5e4f5c294d33cfee0e6f2516f01f694a4ed4620568509a114b66ed27d not found: ID does not exist" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.376254 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsssp\" (UniqueName: \"kubernetes.io/projected/e3c5058d-1324-4403-9bff-a9e204ecfe46-kube-api-access-gsssp\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.458239 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3c5058d-1324-4403-9bff-a9e204ecfe46-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e3c5058d-1324-4403-9bff-a9e204ecfe46" (UID: "e3c5058d-1324-4403-9bff-a9e204ecfe46"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:08:48 crc kubenswrapper[4679]: I0203 13:08:48.478162 4679 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3c5058d-1324-4403-9bff-a9e204ecfe46-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:50 crc kubenswrapper[4679]: I0203 13:08:50.221911 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" path="/var/lib/kubelet/pods/e3c5058d-1324-4403-9bff-a9e204ecfe46/volumes" Feb 03 13:09:06 crc kubenswrapper[4679]: I0203 13:09:06.735429 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:09:06 crc kubenswrapper[4679]: I0203 13:09:06.736031 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:09:06 crc kubenswrapper[4679]: I0203 13:09:06.736084 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 13:09:06 crc kubenswrapper[4679]: I0203 13:09:06.736942 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:09:06 crc kubenswrapper[4679]: I0203 13:09:06.737000 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" gracePeriod=600 Feb 03 13:09:06 crc kubenswrapper[4679]: E0203 13:09:06.874525 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:09:07 crc kubenswrapper[4679]: I0203 13:09:07.393270 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" exitCode=0 Feb 03 13:09:07 crc kubenswrapper[4679]: I0203 13:09:07.393319 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d"} Feb 03 13:09:07 crc kubenswrapper[4679]: I0203 13:09:07.393411 4679 scope.go:117] "RemoveContainer" 
containerID="4ab0f426c2e0d818889c27b1eb5af1a16d949987dadc37525fac0b2b65e00635" Feb 03 13:09:07 crc kubenswrapper[4679]: I0203 13:09:07.393934 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:09:07 crc kubenswrapper[4679]: E0203 13:09:07.394302 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:09:22 crc kubenswrapper[4679]: I0203 13:09:22.212217 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:09:22 crc kubenswrapper[4679]: E0203 13:09:22.213004 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:09:36 crc kubenswrapper[4679]: I0203 13:09:36.058653 4679 scope.go:117] "RemoveContainer" containerID="be0427de18b3998d5de4b623cc103b89a0e485d27582b15520c25f74b0cc1dfc" Feb 03 13:09:36 crc kubenswrapper[4679]: I0203 13:09:36.213406 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:09:36 crc kubenswrapper[4679]: E0203 13:09:36.213683 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:09:50 crc kubenswrapper[4679]: I0203 13:09:50.211437 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:09:50 crc kubenswrapper[4679]: E0203 13:09:50.212577 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:10:01 crc kubenswrapper[4679]: I0203 13:10:01.211616 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:10:01 crc kubenswrapper[4679]: E0203 13:10:01.213587 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:10:14 crc kubenswrapper[4679]: I0203 13:10:14.212007 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:10:14 crc kubenswrapper[4679]: E0203 13:10:14.212941 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:10:29 crc kubenswrapper[4679]: I0203 13:10:29.212054 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:10:29 crc kubenswrapper[4679]: E0203 13:10:29.213079 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:10:36 crc kubenswrapper[4679]: I0203 13:10:36.135931 4679 scope.go:117] "RemoveContainer" containerID="ca8d5ab369de4ad57a742649ff3d79730b8ef0d65dd5ff5f610a03191292368e" Feb 03 13:10:42 crc kubenswrapper[4679]: I0203 13:10:42.212592 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:10:42 crc kubenswrapper[4679]: E0203 13:10:42.213441 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:10:56 crc kubenswrapper[4679]: I0203 13:10:56.213012 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:10:56 crc kubenswrapper[4679]: E0203 13:10:56.214577 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:11:08 crc kubenswrapper[4679]: I0203 13:11:08.217561 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:11:08 crc kubenswrapper[4679]: E0203 13:11:08.218269 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:11:21 crc kubenswrapper[4679]: I0203 13:11:21.212315 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:11:21 crc kubenswrapper[4679]: E0203 13:11:21.213101 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.403384 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfpww/must-gather-nf8c2"] Feb 03 13:11:27 crc kubenswrapper[4679]: E0203 13:11:27.404473 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="extract-utilities" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404496 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="extract-utilities" Feb 03 13:11:27 crc kubenswrapper[4679]: E0203 13:11:27.404523 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="gather" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404535 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="gather" Feb 03 13:11:27 crc kubenswrapper[4679]: E0203 13:11:27.404561 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="copy" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404570 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="copy" Feb 03 13:11:27 crc kubenswrapper[4679]: E0203 13:11:27.404587 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="extract-content" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404594 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="extract-content" Feb 03 13:11:27 crc kubenswrapper[4679]: E0203 13:11:27.404612 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="registry-server" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404620 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="registry-server" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404869 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a26ec6-63c5-47cb-9613-03824f4441ff" containerName="registry-server" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.404924 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="gather" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.405125 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c5058d-1324-4403-9bff-a9e204ecfe46" containerName="copy" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.406888 
4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.408906 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pfpww"/"openshift-service-ca.crt" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.409871 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pfpww"/"kube-root-ca.crt" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.415242 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pfpww/must-gather-nf8c2"] Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.459550 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b22zh\" (UniqueName: \"kubernetes.io/projected/51323700-3c5a-476b-8470-9fb3abfd8c51-kube-api-access-b22zh\") pod \"must-gather-nf8c2\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.459613 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51323700-3c5a-476b-8470-9fb3abfd8c51-must-gather-output\") pod \"must-gather-nf8c2\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.460321 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pfpww"/"default-dockercfg-st6r9" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.561837 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b22zh\" (UniqueName: \"kubernetes.io/projected/51323700-3c5a-476b-8470-9fb3abfd8c51-kube-api-access-b22zh\") pod \"must-gather-nf8c2\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.561901 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51323700-3c5a-476b-8470-9fb3abfd8c51-must-gather-output\") pod \"must-gather-nf8c2\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.562394 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51323700-3c5a-476b-8470-9fb3abfd8c51-must-gather-output\") pod \"must-gather-nf8c2\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.583779 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b22zh\" (UniqueName: \"kubernetes.io/projected/51323700-3c5a-476b-8470-9fb3abfd8c51-kube-api-access-b22zh\") pod \"must-gather-nf8c2\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:27 crc kubenswrapper[4679]: I0203 13:11:27.777248 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:11:28 crc kubenswrapper[4679]: I0203 13:11:28.282800 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pfpww/must-gather-nf8c2"] Feb 03 13:11:28 crc kubenswrapper[4679]: I0203 13:11:28.652252 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/must-gather-nf8c2" event={"ID":"51323700-3c5a-476b-8470-9fb3abfd8c51","Type":"ContainerStarted","Data":"39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b"} Feb 03 13:11:28 crc kubenswrapper[4679]: I0203 13:11:28.652584 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/must-gather-nf8c2" event={"ID":"51323700-3c5a-476b-8470-9fb3abfd8c51","Type":"ContainerStarted","Data":"2babaa7cc14a37c59eba1dae9b494afd876309b86f9ad05c6de3dd70d0f2f9dd"} Feb 03 13:11:29 crc kubenswrapper[4679]: I0203 13:11:29.663560 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/must-gather-nf8c2" event={"ID":"51323700-3c5a-476b-8470-9fb3abfd8c51","Type":"ContainerStarted","Data":"0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9"} Feb 03 13:11:29 crc kubenswrapper[4679]: I0203 13:11:29.680946 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfpww/must-gather-nf8c2" podStartSLOduration=2.680926082 podStartE2EDuration="2.680926082s" podCreationTimestamp="2026-02-03 13:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:11:29.676886552 +0000 UTC m=+3962.151782640" watchObservedRunningTime="2026-02-03 13:11:29.680926082 +0000 UTC m=+3962.155822170" Feb 03 13:11:32 crc kubenswrapper[4679]: I0203 13:11:32.213153 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:11:32 crc kubenswrapper[4679]: E0203 13:11:32.214983 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.012907 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfpww/crc-debug-clkwr"] Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.014314 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.071320 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f672r\" (UniqueName: \"kubernetes.io/projected/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-kube-api-access-f672r\") pod \"crc-debug-clkwr\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.071465 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-host\") pod \"crc-debug-clkwr\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.172620 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f672r\" (UniqueName: \"kubernetes.io/projected/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-kube-api-access-f672r\") pod \"crc-debug-clkwr\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.172746 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-host\") pod \"crc-debug-clkwr\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.172837 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-host\") pod \"crc-debug-clkwr\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.199151 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f672r\" (UniqueName: \"kubernetes.io/projected/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-kube-api-access-f672r\") pod \"crc-debug-clkwr\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.339607 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:11:33 crc kubenswrapper[4679]: W0203 13:11:33.366047 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a4070a5_a9b9_40a1_9d12_1aab4323c2cd.slice/crio-b99546c0220482979ce629104080e0bad57da010cae18ab86815233cb7bb1089 WatchSource:0}: Error finding container b99546c0220482979ce629104080e0bad57da010cae18ab86815233cb7bb1089: Status 404 returned error can't find the container with id b99546c0220482979ce629104080e0bad57da010cae18ab86815233cb7bb1089 Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.700797 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-clkwr" event={"ID":"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd","Type":"ContainerStarted","Data":"eff4d812890e36c0011ecc8258dd66d45c749ded027b9274aaff09ae3f5a7983"} Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.701141 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-clkwr" event={"ID":"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd","Type":"ContainerStarted","Data":"b99546c0220482979ce629104080e0bad57da010cae18ab86815233cb7bb1089"} Feb 03 13:11:33 crc kubenswrapper[4679]: I0203 13:11:33.723107 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfpww/crc-debug-clkwr" podStartSLOduration=1.723083972 podStartE2EDuration="1.723083972s" podCreationTimestamp="2026-02-03 13:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:11:33.716462858 +0000 UTC m=+3966.191358946" watchObservedRunningTime="2026-02-03 13:11:33.723083972 +0000 UTC m=+3966.197980060" Feb 03 13:11:45 crc kubenswrapper[4679]: I0203 13:11:45.211911 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:11:45 crc kubenswrapper[4679]: E0203 13:11:45.212873 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.012904 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-llh4q"] Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.015237 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.023255 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llh4q"] Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.139297 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e535daa2-f963-4d21-aed1-118cce28fb76-catalog-content\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.139642 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdpq6\" (UniqueName: \"kubernetes.io/projected/e535daa2-f963-4d21-aed1-118cce28fb76-kube-api-access-rdpq6\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.139827 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e535daa2-f963-4d21-aed1-118cce28fb76-utilities\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.212077 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:11:57 crc kubenswrapper[4679]: E0203 13:11:57.212468 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.242228 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e535daa2-f963-4d21-aed1-118cce28fb76-utilities\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.242331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e535daa2-f963-4d21-aed1-118cce28fb76-catalog-content\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.242490 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdpq6\" (UniqueName: \"kubernetes.io/projected/e535daa2-f963-4d21-aed1-118cce28fb76-kube-api-access-rdpq6\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.242888 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e535daa2-f963-4d21-aed1-118cce28fb76-utilities\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.242972 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e535daa2-f963-4d21-aed1-118cce28fb76-catalog-content\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.263878 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdpq6\" (UniqueName: \"kubernetes.io/projected/e535daa2-f963-4d21-aed1-118cce28fb76-kube-api-access-rdpq6\") pod \"redhat-operators-llh4q\" (UID: \"e535daa2-f963-4d21-aed1-118cce28fb76\") " pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.336086 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.813158 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llh4q"] Feb 03 13:11:57 crc kubenswrapper[4679]: I0203 13:11:57.961525 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llh4q" event={"ID":"e535daa2-f963-4d21-aed1-118cce28fb76","Type":"ContainerStarted","Data":"e6743cde7f2d96b46b677dc015247a47c34c0f1573c3e3b6f268b34284898112"} Feb 03 13:11:58 crc kubenswrapper[4679]: I0203 13:11:58.971869 4679 generic.go:334] "Generic (PLEG): container finished" podID="e535daa2-f963-4d21-aed1-118cce28fb76" containerID="000d1f5d685da2de6735f9f62f88557ae10b152e36d0664f7ef2dabb24ad1157" exitCode=0 Feb 03 13:11:58 crc kubenswrapper[4679]: I0203 13:11:58.971932 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llh4q" event={"ID":"e535daa2-f963-4d21-aed1-118cce28fb76","Type":"ContainerDied","Data":"000d1f5d685da2de6735f9f62f88557ae10b152e36d0664f7ef2dabb24ad1157"} Feb 03 13:12:07 crc kubenswrapper[4679]: I0203 13:12:07.045413 4679 generic.go:334] "Generic (PLEG): container finished" podID="8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" containerID="eff4d812890e36c0011ecc8258dd66d45c749ded027b9274aaff09ae3f5a7983" exitCode=0 Feb 03 13:12:07 crc kubenswrapper[4679]: I0203 13:12:07.045518 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-clkwr" event={"ID":"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd","Type":"ContainerDied","Data":"eff4d812890e36c0011ecc8258dd66d45c749ded027b9274aaff09ae3f5a7983"} Feb 03 13:12:09 crc kubenswrapper[4679]: I0203 13:12:09.984257 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.022739 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfpww/crc-debug-clkwr"] Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.032906 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfpww/crc-debug-clkwr"] Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.072524 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99546c0220482979ce629104080e0bad57da010cae18ab86815233cb7bb1089" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.072582 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-clkwr" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.103804 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-host\") pod \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.103935 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-host" (OuterVolumeSpecName: "host") pod "8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" (UID: "8a4070a5-a9b9-40a1-9d12-1aab4323c2cd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.104002 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f672r\" (UniqueName: \"kubernetes.io/projected/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-kube-api-access-f672r\") pod \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\" (UID: \"8a4070a5-a9b9-40a1-9d12-1aab4323c2cd\") " Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.104544 4679 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.109914 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-kube-api-access-f672r" (OuterVolumeSpecName: "kube-api-access-f672r") pod "8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" (UID: "8a4070a5-a9b9-40a1-9d12-1aab4323c2cd"). InnerVolumeSpecName "kube-api-access-f672r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.205798 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f672r\" (UniqueName: \"kubernetes.io/projected/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd-kube-api-access-f672r\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:10 crc kubenswrapper[4679]: I0203 13:12:10.222593 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" path="/var/lib/kubelet/pods/8a4070a5-a9b9-40a1-9d12-1aab4323c2cd/volumes" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.087928 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llh4q" event={"ID":"e535daa2-f963-4d21-aed1-118cce28fb76","Type":"ContainerStarted","Data":"6812699a6c61dbeb84be09864aa8b355f720195aaa316a08f74400b86aa64b22"} Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.195319 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfpww/crc-debug-m8sxn"] Feb 03 13:12:11 crc kubenswrapper[4679]: E0203 13:12:11.196052 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" containerName="container-00" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.196145 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" containerName="container-00" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.196551 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4070a5-a9b9-40a1-9d12-1aab4323c2cd" containerName="container-00" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.197407 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.211702 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:12:11 crc kubenswrapper[4679]: E0203 13:12:11.212133 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.326643 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-host\") pod \"crc-debug-m8sxn\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.326691 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ff4\" (UniqueName: \"kubernetes.io/projected/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-kube-api-access-k7ff4\") pod \"crc-debug-m8sxn\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.428751 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-host\") pod \"crc-debug-m8sxn\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.428804 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ff4\" (UniqueName: \"kubernetes.io/projected/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-kube-api-access-k7ff4\") pod \"crc-debug-m8sxn\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.428884 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-host\") pod \"crc-debug-m8sxn\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.458909 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ff4\" (UniqueName: \"kubernetes.io/projected/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-kube-api-access-k7ff4\") pod \"crc-debug-m8sxn\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:11 crc kubenswrapper[4679]: I0203 13:12:11.517318 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:12 crc kubenswrapper[4679]: I0203 13:12:12.101964 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-m8sxn" event={"ID":"ca90aee2-5e46-4949-a3fc-5cedad0b14e7","Type":"ContainerStarted","Data":"112eaa141d99291628ac49853c7787affd7a266adc22366d73dd133843e85847"} Feb 03 13:12:13 crc kubenswrapper[4679]: I0203 13:12:13.117749 4679 generic.go:334] "Generic (PLEG): container finished" podID="e535daa2-f963-4d21-aed1-118cce28fb76" containerID="6812699a6c61dbeb84be09864aa8b355f720195aaa316a08f74400b86aa64b22" exitCode=0 Feb 03 13:12:13 crc kubenswrapper[4679]: I0203 13:12:13.117866 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llh4q" event={"ID":"e535daa2-f963-4d21-aed1-118cce28fb76","Type":"ContainerDied","Data":"6812699a6c61dbeb84be09864aa8b355f720195aaa316a08f74400b86aa64b22"} Feb 03 13:12:15 crc kubenswrapper[4679]: I0203 13:12:15.135745 4679 generic.go:334] "Generic (PLEG): container finished" podID="ca90aee2-5e46-4949-a3fc-5cedad0b14e7" containerID="76e999a68898e841d360c79149a9ef3eac921c177ed7b2c9691a00c7d313b507" exitCode=0 Feb 03 13:12:15 crc kubenswrapper[4679]: I0203 13:12:15.135845 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-m8sxn" event={"ID":"ca90aee2-5e46-4949-a3fc-5cedad0b14e7","Type":"ContainerDied","Data":"76e999a68898e841d360c79149a9ef3eac921c177ed7b2c9691a00c7d313b507"} Feb 03 13:12:15 crc kubenswrapper[4679]: I0203 13:12:15.139528 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llh4q" event={"ID":"e535daa2-f963-4d21-aed1-118cce28fb76","Type":"ContainerStarted","Data":"c87f06a7e5f27027243f6b01979ef9865a14c35d674d14a4d40961b3cbc1679f"} Feb 03 13:12:15 crc kubenswrapper[4679]: I0203 13:12:15.185843 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-llh4q" podStartSLOduration=3.616756467 podStartE2EDuration="19.185815152s" podCreationTimestamp="2026-02-03 13:11:56 +0000 UTC" firstStartedPulling="2026-02-03 13:11:58.975473913 +0000 UTC m=+3991.450370001" lastFinishedPulling="2026-02-03 13:12:14.544532598 +0000 UTC m=+4007.019428686" observedRunningTime="2026-02-03 13:12:15.167468998 +0000 UTC m=+4007.642365096" watchObservedRunningTime="2026-02-03 13:12:15.185815152 +0000 UTC m=+4007.660711240" Feb 03 13:12:15 crc kubenswrapper[4679]: I0203 13:12:15.508304 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfpww/crc-debug-m8sxn"] Feb 03 13:12:15 crc kubenswrapper[4679]: I0203 13:12:15.519172 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfpww/crc-debug-m8sxn"] Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.250567 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.320698 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-host\") pod \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.320821 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-host" (OuterVolumeSpecName: "host") pod "ca90aee2-5e46-4949-a3fc-5cedad0b14e7" (UID: "ca90aee2-5e46-4949-a3fc-5cedad0b14e7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.320905 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ff4\" (UniqueName: \"kubernetes.io/projected/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-kube-api-access-k7ff4\") pod \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\" (UID: \"ca90aee2-5e46-4949-a3fc-5cedad0b14e7\") " Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.323518 4679 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.333229 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-kube-api-access-k7ff4" (OuterVolumeSpecName: "kube-api-access-k7ff4") pod "ca90aee2-5e46-4949-a3fc-5cedad0b14e7" (UID: "ca90aee2-5e46-4949-a3fc-5cedad0b14e7"). InnerVolumeSpecName "kube-api-access-k7ff4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.425515 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ff4\" (UniqueName: \"kubernetes.io/projected/ca90aee2-5e46-4949-a3fc-5cedad0b14e7-kube-api-access-k7ff4\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.905435 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfpww/crc-debug-hjwqf"] Feb 03 13:12:16 crc kubenswrapper[4679]: E0203 13:12:16.906220 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca90aee2-5e46-4949-a3fc-5cedad0b14e7" containerName="container-00" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.906245 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca90aee2-5e46-4949-a3fc-5cedad0b14e7" containerName="container-00" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.906475 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca90aee2-5e46-4949-a3fc-5cedad0b14e7" containerName="container-00" Feb 03 13:12:16 crc kubenswrapper[4679]: I0203 13:12:16.907039 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.047060 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lmwx\" (UniqueName: \"kubernetes.io/projected/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-kube-api-access-6lmwx\") pod \"crc-debug-hjwqf\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.047197 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-host\") pod \"crc-debug-hjwqf\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.149413 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-host\") pod \"crc-debug-hjwqf\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.149592 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lmwx\" (UniqueName: \"kubernetes.io/projected/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-kube-api-access-6lmwx\") pod \"crc-debug-hjwqf\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.149901 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-host\") pod \"crc-debug-hjwqf\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.157577 4679 scope.go:117] "RemoveContainer" containerID="76e999a68898e841d360c79149a9ef3eac921c177ed7b2c9691a00c7d313b507" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.157633 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-m8sxn" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.167150 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lmwx\" (UniqueName: \"kubernetes.io/projected/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-kube-api-access-6lmwx\") pod \"crc-debug-hjwqf\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.225055 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:17 crc kubenswrapper[4679]: W0203 13:12:17.246487 4679 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod030eb74c_f8cc_4ab4_aade_d24edf5ea9b3.slice/crio-5c728080d613495b47d957c824548b03dfa75b6ae1b51e0999d5b4316345d4a2 WatchSource:0}: Error finding container 5c728080d613495b47d957c824548b03dfa75b6ae1b51e0999d5b4316345d4a2: Status 404 returned error can't find the container with id 5c728080d613495b47d957c824548b03dfa75b6ae1b51e0999d5b4316345d4a2 Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.336559 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:12:17 crc kubenswrapper[4679]: I0203 13:12:17.338576 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:12:18 crc kubenswrapper[4679]: I0203 13:12:18.168500 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-hjwqf" event={"ID":"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3","Type":"ContainerStarted","Data":"5c728080d613495b47d957c824548b03dfa75b6ae1b51e0999d5b4316345d4a2"} Feb 03 13:12:18 crc kubenswrapper[4679]: I0203 13:12:18.224298 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca90aee2-5e46-4949-a3fc-5cedad0b14e7" path="/var/lib/kubelet/pods/ca90aee2-5e46-4949-a3fc-5cedad0b14e7/volumes" Feb 03 13:12:18 crc kubenswrapper[4679]: I0203 13:12:18.877409 4679 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-llh4q" podUID="e535daa2-f963-4d21-aed1-118cce28fb76" containerName="registry-server" probeResult="failure" output=< Feb 03 13:12:18 crc kubenswrapper[4679]: timeout: failed to connect service ":50051" within 1s Feb 03 13:12:18 crc kubenswrapper[4679]: > Feb 03 13:12:19 crc kubenswrapper[4679]: I0203 13:12:19.176596 4679 generic.go:334] "Generic (PLEG): container finished" podID="030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" containerID="4cb12f385cada405f29e3e31f32a293c61e8e0a684be47a15273be2b642f492f" exitCode=0 Feb 03 13:12:19 crc kubenswrapper[4679]: I0203 13:12:19.176649 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/crc-debug-hjwqf" event={"ID":"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3","Type":"ContainerDied","Data":"4cb12f385cada405f29e3e31f32a293c61e8e0a684be47a15273be2b642f492f"} Feb 03 13:12:19 crc kubenswrapper[4679]: I0203 13:12:19.213060 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfpww/crc-debug-hjwqf"] Feb 03 13:12:19 crc kubenswrapper[4679]: I0203 13:12:19.221858 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfpww/crc-debug-hjwqf"] Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.291274 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.405452 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-host\") pod \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.405552 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lmwx\" (UniqueName: \"kubernetes.io/projected/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-kube-api-access-6lmwx\") pod \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\" (UID: \"030eb74c-f8cc-4ab4-aade-d24edf5ea9b3\") " Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.405591 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-host" (OuterVolumeSpecName: "host") pod "030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" (UID: "030eb74c-f8cc-4ab4-aade-d24edf5ea9b3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.405961 4679 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.411846 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-kube-api-access-6lmwx" (OuterVolumeSpecName: "kube-api-access-6lmwx") pod "030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" (UID: "030eb74c-f8cc-4ab4-aade-d24edf5ea9b3"). InnerVolumeSpecName "kube-api-access-6lmwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:12:20 crc kubenswrapper[4679]: I0203 13:12:20.507455 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lmwx\" (UniqueName: \"kubernetes.io/projected/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3-kube-api-access-6lmwx\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:21 crc kubenswrapper[4679]: I0203 13:12:21.192602 4679 scope.go:117] "RemoveContainer" containerID="4cb12f385cada405f29e3e31f32a293c61e8e0a684be47a15273be2b642f492f" Feb 03 13:12:21 crc kubenswrapper[4679]: I0203 13:12:21.192695 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/crc-debug-hjwqf" Feb 03 13:12:22 crc kubenswrapper[4679]: I0203 13:12:22.211543 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:12:22 crc kubenswrapper[4679]: E0203 13:12:22.211933 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:12:22 crc kubenswrapper[4679]: I0203 13:12:22.222834 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" path="/var/lib/kubelet/pods/030eb74c-f8cc-4ab4-aade-d24edf5ea9b3/volumes" Feb 03 13:12:27 crc kubenswrapper[4679]: I0203 13:12:27.390442 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:12:27 crc kubenswrapper[4679]: I0203 13:12:27.461172 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-llh4q" Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.049381 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llh4q"] Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.225915 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rs7cm"] Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.226746 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rs7cm" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="registry-server" containerID="cri-o://a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad" gracePeriod=2 Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.763199 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.889628 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klbrt\" (UniqueName: \"kubernetes.io/projected/330ffce1-de6e-4402-8bb5-52976082c21e-kube-api-access-klbrt\") pod \"330ffce1-de6e-4402-8bb5-52976082c21e\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.889918 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-catalog-content\") pod \"330ffce1-de6e-4402-8bb5-52976082c21e\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.890068 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-utilities\") pod \"330ffce1-de6e-4402-8bb5-52976082c21e\" (UID: \"330ffce1-de6e-4402-8bb5-52976082c21e\") " Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.890923 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-utilities" (OuterVolumeSpecName: "utilities") pod "330ffce1-de6e-4402-8bb5-52976082c21e" (UID: "330ffce1-de6e-4402-8bb5-52976082c21e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.993232 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:28 crc kubenswrapper[4679]: I0203 13:12:28.997234 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "330ffce1-de6e-4402-8bb5-52976082c21e" (UID: "330ffce1-de6e-4402-8bb5-52976082c21e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.095504 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330ffce1-de6e-4402-8bb5-52976082c21e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.278314 4679 generic.go:334] "Generic (PLEG): container finished" podID="330ffce1-de6e-4402-8bb5-52976082c21e" containerID="a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad" exitCode=0 Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.278417 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerDied","Data":"a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad"} Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.278491 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rs7cm" event={"ID":"330ffce1-de6e-4402-8bb5-52976082c21e","Type":"ContainerDied","Data":"aef398a500d0fab6f8747e89ab50bad7d9d940c655079a6471098d92619ba828"} Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.278512 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rs7cm" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.278519 4679 scope.go:117] "RemoveContainer" containerID="a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.308094 4679 scope.go:117] "RemoveContainer" containerID="cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.386678 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330ffce1-de6e-4402-8bb5-52976082c21e-kube-api-access-klbrt" (OuterVolumeSpecName: "kube-api-access-klbrt") pod "330ffce1-de6e-4402-8bb5-52976082c21e" (UID: "330ffce1-de6e-4402-8bb5-52976082c21e"). InnerVolumeSpecName "kube-api-access-klbrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.398415 4679 scope.go:117] "RemoveContainer" containerID="1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.411697 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klbrt\" (UniqueName: \"kubernetes.io/projected/330ffce1-de6e-4402-8bb5-52976082c21e-kube-api-access-klbrt\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.606573 4679 scope.go:117] "RemoveContainer" containerID="a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad" Feb 03 13:12:29 crc kubenswrapper[4679]: E0203 13:12:29.607409 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad\": container with ID starting with a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad not found: ID does not exist" containerID="a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.607454 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad"} err="failed to get container status \"a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad\": rpc error: code = NotFound desc = could not find container \"a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad\": container with ID starting with a3b1196cfbbe0c35178e0bc324750a2ceeff9164a71b4e1abce6bdc708847fad not found: ID does not exist" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.607481 4679 scope.go:117] "RemoveContainer" containerID="cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819" Feb 03 13:12:29 crc kubenswrapper[4679]: E0203 13:12:29.607881 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819\": container with ID starting with cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819 not found: ID does not exist" containerID="cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.607932 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819"} err="failed to get container status \"cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819\": rpc error: code = NotFound desc = could not find container \"cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819\": container with ID starting with cf0930b9403bd0dad60895a7548332f70106daa8c3f79ea2161efdf88c50a819 not found: ID does not exist" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.607961 4679 scope.go:117] "RemoveContainer" containerID="1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662" Feb 03 13:12:29 crc kubenswrapper[4679]: E0203 13:12:29.608253 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662\": container with ID starting with 1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662 not found: ID does not 
exist" containerID="1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.608292 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662"} err="failed to get container status \"1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662\": rpc error: code = NotFound desc = could not find container \"1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662\": container with ID starting with 1900dcaad154655b306e9015869db00a008fda173a392bb7def0025f967da662 not found: ID does not exist" Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.669325 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rs7cm"] Feb 03 13:12:29 crc kubenswrapper[4679]: I0203 13:12:29.679854 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rs7cm"] Feb 03 13:12:30 crc kubenswrapper[4679]: I0203 13:12:30.225142 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" path="/var/lib/kubelet/pods/330ffce1-de6e-4402-8bb5-52976082c21e/volumes" Feb 03 13:12:37 crc kubenswrapper[4679]: I0203 13:12:37.212718 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:12:37 crc kubenswrapper[4679]: E0203 13:12:37.213931 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:12:48 crc kubenswrapper[4679]: I0203 13:12:48.220240 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:12:48 crc kubenswrapper[4679]: E0203 13:12:48.221175 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:12:52 crc kubenswrapper[4679]: I0203 13:12:52.635558 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c4bcd4b-hkzh6_2aa77b26-ca52-4ef9-a1c2-68237a080e1b/barbican-api/0.log" Feb 03 13:12:52 crc kubenswrapper[4679]: I0203 13:12:52.852234 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6c4bcd4b-hkzh6_2aa77b26-ca52-4ef9-a1c2-68237a080e1b/barbican-api-log/0.log" Feb 03 13:12:52 crc kubenswrapper[4679]: I0203 13:12:52.879433 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7d4655f6d4-rdwtj_91c5b9c5-d4c7-4138-90de-ee51de9f7a5f/barbican-keystone-listener/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.076825 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7d4655f6d4-rdwtj_91c5b9c5-d4c7-4138-90de-ee51de9f7a5f/barbican-keystone-listener-log/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.097805 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-789dd74f99-dtwb4_0786ef5c-404a-4c24-8188-d757082c1419/barbican-worker/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.133831 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-789dd74f99-dtwb4_0786ef5c-404a-4c24-8188-d757082c1419/barbican-worker-log/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.269579 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bgh5w_83eaca34-8d94-48a8-8e56-58db37e376ab/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.359426 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/ceilometer-central-agent/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.559709 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/ceilometer-notification-agent/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.599955 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/sg-core/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.600108 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3c9a97ad-868b-4b32-b200-ee3cb3ad9098/proxy-httpd/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.828987 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbe4378f-83bf-420b-b73a-185c57ab9771/cinder-api/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.860822 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbe4378f-83bf-420b-b73a-185c57ab9771/cinder-api-log/0.log" Feb 03 13:12:53 crc kubenswrapper[4679]: I0203 13:12:53.966911 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4882a26d-4240-46b5-917c-dc6842916963/cinder-scheduler/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.074848 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4882a26d-4240-46b5-917c-dc6842916963/probe/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.089204 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xmthc_21b7ed40-7c17-44bd-9ad8-f47f21ea4e84/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.273691 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4bxc4_4d6011df-83a4-4d86-ac66-61b00cd615d4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.349225 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-bgkzg_d3a4bf4d-7cf5-4026-acdf-53345ca82af1/init/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.529964 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-bgkzg_d3a4bf4d-7cf5-4026-acdf-53345ca82af1/init/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.581348 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hmnl9_aa12b60d-98f3-42a6-b429-cd451b1ec5fc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.601651 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-bgkzg_d3a4bf4d-7cf5-4026-acdf-53345ca82af1/dnsmasq-dns/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.854225 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b1d9c6da-29c6-43e7-92a6-ee0c5901c36b/glance-httpd/0.log" Feb 03 13:12:54 crc kubenswrapper[4679]: I0203 13:12:54.887081 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b1d9c6da-29c6-43e7-92a6-ee0c5901c36b/glance-log/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.059150 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1672261a-caab-4c72-9be3-78b40978e2cf/glance-log/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.059723 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1672261a-caab-4c72-9be3-78b40978e2cf/glance-httpd/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.249529 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74557bdb5d-lsfq8_a09ad5f1-6af1-452d-a08f-271579ecb3d1/horizon/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.376224 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gfkxh_bf811a1c-76b7-4b43-b658-d68388d38cb8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.606672 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-whhrf_2399747b-7fec-4916-8a58-13a53de36d78/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.673588 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-74557bdb5d-lsfq8_a09ad5f1-6af1-452d-a08f-271579ecb3d1/horizon-log/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.922479 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29502061-wb5cr_23cead04-2ba2-47aa-8b2c-fe29c2a25fb3/keystone-cron/0.log" Feb 03 13:12:55 crc kubenswrapper[4679]: I0203 13:12:55.951957 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b6496f477-9vvrm_0d40e305-3fdf-4ce8-a586-7f2b9786e0eb/keystone-api/0.log" Feb 03 13:12:56 crc kubenswrapper[4679]: I0203 13:12:56.121958 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cba05e44-77a6-4a44-84c6-8bb482680662/kube-state-metrics/0.log" Feb 03 13:12:56 crc kubenswrapper[4679]: I0203 13:12:56.234770 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5wjx5_67eba320-30c8-4f6e-9958-f58ee00e9bdc/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:56 crc kubenswrapper[4679]: I0203 13:12:56.661251 4679 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-7b98ff4cb5-lk4d9_fd629794-5ce3-4d07-9f6c-c0a85424379f/neutron-httpd/0.log" Feb 03 13:12:56 crc kubenswrapper[4679]: I0203 13:12:56.663213 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b98ff4cb5-lk4d9_fd629794-5ce3-4d07-9f6c-c0a85424379f/neutron-api/0.log" Feb 03 13:12:56 crc kubenswrapper[4679]: I0203 13:12:56.941445 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6jc2_cc385278-b837-4f12-bd6f-5fdd89b02bd7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:57 crc kubenswrapper[4679]: I0203 13:12:57.494431 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0144e14a-b09d-4182-8008-358b3032b05c/nova-api-log/0.log" Feb 03 13:12:57 crc kubenswrapper[4679]: I0203 13:12:57.570205 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e72b3e9b-a5ec-43f1-a286-43f2ce2f5240/nova-cell0-conductor-conductor/0.log" Feb 03 13:12:57 crc kubenswrapper[4679]: I0203 13:12:57.879191 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0144e14a-b09d-4182-8008-358b3032b05c/nova-api-api/0.log" Feb 03 13:12:57 crc kubenswrapper[4679]: I0203 13:12:57.978691 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ff89b1cf-ed12-47b9-a9ca-2f9c1a5d35d9/nova-cell1-conductor-conductor/0.log" Feb 03 13:12:57 crc kubenswrapper[4679]: I0203 13:12:57.983761 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bafb8aaf-7819-4978-aaae-7d26a4a126b6/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 13:12:58 crc kubenswrapper[4679]: I0203 13:12:58.223432 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-82h87_b00ee047-2435-41ba-b376-be13d8309d1f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:12:58 crc kubenswrapper[4679]: I0203 13:12:58.416546 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_43e2a214-af77-4834-9af8-6435c0cc24ba/nova-metadata-log/0.log" Feb 03 13:12:58 crc kubenswrapper[4679]: I0203 13:12:58.805930 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f3933651-b0cd-48e8-bcf4-b6ec20930d3b/nova-scheduler-scheduler/0.log" Feb 03 13:12:59 crc kubenswrapper[4679]: I0203 13:12:59.212671 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:12:59 crc kubenswrapper[4679]: E0203 13:12:59.212907 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:12:59 crc kubenswrapper[4679]: I0203 13:12:59.498955 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b788c2a3-0e8f-4a4a-b121-f4c021b4932c/mysql-bootstrap/0.log" Feb 03 13:12:59 crc kubenswrapper[4679]: I0203 13:12:59.737183 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_b788c2a3-0e8f-4a4a-b121-f4c021b4932c/mysql-bootstrap/0.log" Feb 03 13:12:59 crc kubenswrapper[4679]: I0203 13:12:59.751778 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b788c2a3-0e8f-4a4a-b121-f4c021b4932c/galera/0.log" Feb 03 13:12:59 crc kubenswrapper[4679]: I0203 13:12:59.835421 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_43e2a214-af77-4834-9af8-6435c0cc24ba/nova-metadata-metadata/0.log" Feb 03 13:12:59 crc kubenswrapper[4679]: I0203 13:12:59.928718 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8a14eb9-fdf3-44dc-b8a8-0494fd209dea/mysql-bootstrap/0.log" Feb 03 13:13:00 crc kubenswrapper[4679]: I0203 13:13:00.129958 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8a14eb9-fdf3-44dc-b8a8-0494fd209dea/mysql-bootstrap/0.log" Feb 03 13:13:00 crc kubenswrapper[4679]: I0203 13:13:00.138488 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e8a14eb9-fdf3-44dc-b8a8-0494fd209dea/galera/0.log" Feb 03 13:13:00 crc kubenswrapper[4679]: I0203 13:13:00.214800 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9015531f-675f-40b4-a643-94a33a87592b/openstackclient/0.log" Feb 03 13:13:00 crc kubenswrapper[4679]: I0203 13:13:00.421516 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5tt4c_c908c598-a229-467c-8430-de77205f95ec/ovn-controller/0.log" Feb 03 13:13:00 crc kubenswrapper[4679]: I0203 13:13:00.460744 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kmmd2_54fecd77-d186-4510-9e06-4ff67edee154/openstack-network-exporter/0.log" Feb 03 13:13:00 crc kubenswrapper[4679]: I0203 13:13:00.641637 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovsdb-server-init/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.384936 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovs-vswitchd/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.410727 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovsdb-server-init/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.414506 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9zrqh_4d10dd12-5213-414c-bd2b-76396833ad19/ovsdb-server/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.601347 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mfmjt_cc05f31e-be8f-497a-ba7b-1f5c54d070c4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.645684 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4275cf53-917f-4b88-9832-b3f9da33b445/ovn-northd/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.692075 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4275cf53-917f-4b88-9832-b3f9da33b445/openstack-network-exporter/0.log" Feb 03 13:13:01 crc kubenswrapper[4679]: I0203 13:13:01.967034 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_bf6e3dac-ec8b-422b-9459-3554f884594d/openstack-network-exporter/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.092318 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bf6e3dac-ec8b-422b-9459-3554f884594d/ovsdbserver-nb/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.196695 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea35f3b6-94df-45c5-9b94-af55636b7ad0/ovsdbserver-sb/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.315230 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ea35f3b6-94df-45c5-9b94-af55636b7ad0/openstack-network-exporter/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.393184 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79464686c6-vwq7l_b242f52b-0a15-4493-9da2-15aca091df48/placement-api/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.581876 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79464686c6-vwq7l_b242f52b-0a15-4493-9da2-15aca091df48/placement-log/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.619793 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_666e9640-9258-44a6-980d-e79d1dc7f2b3/setup-container/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.880803 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_666e9640-9258-44a6-980d-e79d1dc7f2b3/setup-container/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.950063 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_891b9bf5-a68a-4118-a002-3b74879fac0b/setup-container/0.log" Feb 03 13:13:02 crc kubenswrapper[4679]: I0203 13:13:02.991401 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_666e9640-9258-44a6-980d-e79d1dc7f2b3/rabbitmq/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.187138 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_891b9bf5-a68a-4118-a002-3b74879fac0b/rabbitmq/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.242698 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_891b9bf5-a68a-4118-a002-3b74879fac0b/setup-container/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.263713 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-q6hfs_e4d8a4b6-343a-4fa1-9fc1-8d84ef3e0b61/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.503235 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5bzht_b709c6fe-9a41-44fd-9350-989aa43947da/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.576264 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fs98s_6fc50826-5b8c-4973-bef7-78e861d37c96/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.718288 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lkc9z_e11ad6fe-5a94-4797-a827-ca1918e67f79/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:03 crc kubenswrapper[4679]: I0203 13:13:03.884597 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zkxjk_cc4f8656-bb8e-4bbe-aee5-ec553e2cf14b/ssh-known-hosts-edpm-deployment/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.081123 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56b6b9b667-hn9mj_7034878f-0540-438b-b9b3-5e726c04e49c/proxy-server/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.161025 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56b6b9b667-hn9mj_7034878f-0540-438b-b9b3-5e726c04e49c/proxy-httpd/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.278101 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7mtlb_43821977-e5d9-4405-b6c6-d739a8fea389/swift-ring-rebalance/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.425389 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-reaper/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.433228 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-auditor/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.549858 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-replicator/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.640728 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-auditor/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.643419 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/account-server/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.750008 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-replicator/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.878443 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-updater/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.884872 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/container-server/0.log" Feb 03 13:13:04 crc kubenswrapper[4679]: I0203 13:13:04.918617 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-auditor/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.043646 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-expirer/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.136124 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-server/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.147871 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-replicator/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.194826 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/object-updater/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.273579 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/rsync/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.351767 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_17121344-4061-43d2-bf89-7a3684b88461/swift-recon-cron/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.538404 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g8gjq_fbcf4978-33e8-4444-b972-dd9859e52ec0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.580213 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a210a822-5111-45b8-9068-e745a7471962/tempest-tests-tempest-tests-runner/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.757767 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_322a6172-ec37-452f-b49c-15af3f777c8a/test-operator-logs-container/0.log" Feb 03 13:13:05 crc kubenswrapper[4679]: I0203 13:13:05.839504 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6ktlz_3b077848-9e84-4914-83b8-d47ebe659982/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:13:10 crc kubenswrapper[4679]: I0203 13:13:10.211390 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:13:10 crc kubenswrapper[4679]: E0203 13:13:10.213323 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:13:14 crc kubenswrapper[4679]: I0203 13:13:14.508206 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9e899cda-42d0-40ae-a9c6-34f4bbad9fe7/memcached/0.log" Feb 03 13:13:22 crc kubenswrapper[4679]: I0203 13:13:22.211959 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:13:22 crc kubenswrapper[4679]: E0203 13:13:22.212810 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:13:32 crc kubenswrapper[4679]: I0203 13:13:32.969411 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/util/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.123163 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/util/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.163375 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/pull/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.182702 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/pull/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.212220 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:13:33 crc kubenswrapper[4679]: E0203 13:13:33.212494 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.335885 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/util/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.355492 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/pull/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.384900 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d21a160f72c3e3eb10489ff07ee6e72cf737044df1c336ddff9eead2b4jlfj_97a9d5bd-ce1a-48df-8335-cb7c06ea40d5/extract/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.598871 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-xhb56_d39a188d-08b7-4670-a5da-c65da1b30936/manager/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.611261 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-6l9l6_3722274c-5a6f-49ef-89ac-06fc5afd3098/manager/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.746868 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-x9kws_d96d5316-a678-427e-aa6f-a606876142d3/manager/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.890187 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-vgbcs_e92384fd-2d3b-4ba9-b265-92dbc9941750/manager/0.log" Feb 03 13:13:33 crc kubenswrapper[4679]: I0203 13:13:33.971248 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-k5gcz_9ebeeb0d-99ac-4e30-93cf-9feb4cac17d4/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.083095 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7p976_ee886e3f-df4d-43e4-b1ad-8eec77ead216/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.359298 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-pwgd6_36b08aa8-071f-4862-821c-9ee85afcdf8e/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.388608 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vgg4d_2de6e912-5456-4209-85d7-2bddcedc0384/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.588509 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-h77pz_b498e6cd-6f07-461f-bf7a-5842461cbbbe/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.640655 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-m6jbm_ee3e0d19-7d26-4e63-8859-f1a2596a0ba5/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.788572 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbpk"] Feb 03 13:13:34 crc kubenswrapper[4679]: E0203 13:13:34.788945 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" containerName="container-00" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.788961 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" containerName="container-00" Feb 03 13:13:34 crc kubenswrapper[4679]: E0203 13:13:34.788978 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="extract-utilities" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.788985 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="extract-utilities" Feb 03 13:13:34 crc kubenswrapper[4679]: E0203 13:13:34.788994 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="extract-content" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.789001 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="extract-content" Feb 03 13:13:34 crc kubenswrapper[4679]: E0203 13:13:34.789021 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="registry-server" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.789027 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="registry-server" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.789203 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="330ffce1-de6e-4402-8bb5-52976082c21e" containerName="registry-server" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.789221 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="030eb74c-f8cc-4ab4-aade-d24edf5ea9b3" 
containerName="container-00" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.790451 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.822276 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbpk"] Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.823021 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-nvx58_a0fa5212-9380-4d21-a8ae-a400eb674de3/manager/0.log" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.862251 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-utilities\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.862599 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-catalog-content\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.862646 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7ns2\" (UniqueName: \"kubernetes.io/projected/c8b2fa4c-bbce-450f-96b5-77177c7339a9-kube-api-access-v7ns2\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.964344 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-utilities\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.964710 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-catalog-content\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.964749 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7ns2\" (UniqueName: \"kubernetes.io/projected/c8b2fa4c-bbce-450f-96b5-77177c7339a9-kube-api-access-v7ns2\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.965068 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-utilities\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.965331 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-catalog-content\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:34 crc kubenswrapper[4679]: I0203 13:13:34.987190 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7ns2\" (UniqueName: \"kubernetes.io/projected/c8b2fa4c-bbce-450f-96b5-77177c7339a9-kube-api-access-v7ns2\") pod \"redhat-marketplace-nwbpk\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.036197 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-8gc44_6d552366-fc97-4365-8abd-5b32b28a09b2/manager/0.log" Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.112561 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.196631 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bllmz_79b06c14-7e75-4306-8001-3217809de327/manager/0.log" Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.573995 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-4pvxk_e25213d7-4c75-46b8-b39b-44e75557c434/manager/0.log" Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.805947 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbpk"] Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.842173 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d76m8x_11b2dd9f-a9fc-427c-a2a2-744484f359b4/manager/0.log" Feb 03 13:13:35 crc kubenswrapper[4679]: I0203 13:13:35.991604 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68c5f5659f-77cqz_52069189-49bf-46cc-b13d-b7705a4e68f1/operator/0.log" Feb 03 13:13:36 crc kubenswrapper[4679]: I0203 13:13:36.018664 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbpk" event={"ID":"c8b2fa4c-bbce-450f-96b5-77177c7339a9","Type":"ContainerStarted","Data":"f78ac6c8a711059e99a5cdf2f63634c786814c2ea6fbafd30f953a84ac3518ee"} Feb 03 13:13:36 crc kubenswrapper[4679]: I0203 13:13:36.291812 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-248q5_cf6f1209-4fa8-4e3c-ba2d-0ebc986ead4a/registry-server/0.log" Feb 03 13:13:36 crc kubenswrapper[4679]: I0203 13:13:36.436637 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-42stf_1f76a687-e27f-4d78-aeea-c2faca503549/manager/0.log" Feb 03 13:13:36 crc kubenswrapper[4679]: I0203 13:13:36.609245 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-46lw2_6b1f821d-79a5-4fe4-bc8a-f850716781e7/manager/0.log" Feb 03 13:13:36 crc kubenswrapper[4679]: I0203 13:13:36.777087 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ddt7p_ebf666dd-6b96-4907-8024-800d9634590f/operator/0.log" Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.022845 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-599dbc9849-9t5wf_632ab40b-9540-48ad-b1c7-7b5b1603e4d2/manager/0.log" Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.032415 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-vxkt7_35892343-44c5-4cfb-9061-0b0542d23b99/manager/0.log" Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.051886 4679 generic.go:334] "Generic (PLEG): container finished" podID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerID="202d4dfb8757262bf99942ed7598599b499cf9645d7482db110ce0789dcdf88d" exitCode=0 Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.051936 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbpk" event={"ID":"c8b2fa4c-bbce-450f-96b5-77177c7339a9","Type":"ContainerDied","Data":"202d4dfb8757262bf99942ed7598599b499cf9645d7482db110ce0789dcdf88d"} Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.055242 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.300456 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-wktnn_8e3f82d2-bf0a-4203-80af-3b48711ad1f0/manager/0.log" Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.693105 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-89mxg_dedc1caa-ae76-49df-818b-49e570c09a31/manager/0.log" Feb 03 13:13:37 crc kubenswrapper[4679]: I0203 13:13:37.697347 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-w6xc9_3f6911aa-e91a-4ab6-b2cd-0c1a08977a57/manager/0.log" Feb 03 13:13:39 crc kubenswrapper[4679]: I0203 13:13:39.069880 4679 generic.go:334] "Generic (PLEG): container finished" podID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerID="017cb766d6e4d529eb42c72633cf7070bbbdcbc9ba717805a780454ca87de705" exitCode=0 Feb 03 13:13:39 crc kubenswrapper[4679]: I0203 13:13:39.069962 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbpk" event={"ID":"c8b2fa4c-bbce-450f-96b5-77177c7339a9","Type":"ContainerDied","Data":"017cb766d6e4d529eb42c72633cf7070bbbdcbc9ba717805a780454ca87de705"} Feb 03 13:13:40 crc kubenswrapper[4679]: I0203 13:13:40.089400 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbpk" event={"ID":"c8b2fa4c-bbce-450f-96b5-77177c7339a9","Type":"ContainerStarted","Data":"13006fe0bac39884986445166aa8b0b88d8a66dcfe580d8a1f94799e42a7ef7a"} Feb 03 13:13:40 crc kubenswrapper[4679]: I0203 13:13:40.117240 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwbpk" podStartSLOduration=3.6775520779999997 podStartE2EDuration="6.117217217s" podCreationTimestamp="2026-02-03 13:13:34 +0000 UTC" firstStartedPulling="2026-02-03 13:13:37.054833045 +0000 UTC m=+4089.529729133" lastFinishedPulling="2026-02-03 13:13:39.494498174 +0000 UTC m=+4091.969394272" observedRunningTime="2026-02-03 
13:13:40.110570232 +0000 UTC m=+4092.585466330" watchObservedRunningTime="2026-02-03 13:13:40.117217217 +0000 UTC m=+4092.592113295" Feb 03 13:13:45 crc kubenswrapper[4679]: I0203 13:13:45.113465 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:45 crc kubenswrapper[4679]: I0203 13:13:45.114098 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:45 crc kubenswrapper[4679]: I0203 13:13:45.167688 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:45 crc kubenswrapper[4679]: I0203 13:13:45.225490 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:45 crc kubenswrapper[4679]: I0203 13:13:45.779336 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbpk"] Feb 03 13:13:47 crc kubenswrapper[4679]: I0203 13:13:47.140322 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwbpk" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="registry-server" containerID="cri-o://13006fe0bac39884986445166aa8b0b88d8a66dcfe580d8a1f94799e42a7ef7a" gracePeriod=2 Feb 03 13:13:47 crc kubenswrapper[4679]: I0203 13:13:47.212236 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:13:47 crc kubenswrapper[4679]: E0203 13:13:47.212833 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.150767 4679 generic.go:334] "Generic (PLEG): container finished" podID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerID="13006fe0bac39884986445166aa8b0b88d8a66dcfe580d8a1f94799e42a7ef7a" exitCode=0 Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.150844 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbpk" event={"ID":"c8b2fa4c-bbce-450f-96b5-77177c7339a9","Type":"ContainerDied","Data":"13006fe0bac39884986445166aa8b0b88d8a66dcfe580d8a1f94799e42a7ef7a"} Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.151115 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwbpk" event={"ID":"c8b2fa4c-bbce-450f-96b5-77177c7339a9","Type":"ContainerDied","Data":"f78ac6c8a711059e99a5cdf2f63634c786814c2ea6fbafd30f953a84ac3518ee"} Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.151129 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78ac6c8a711059e99a5cdf2f63634c786814c2ea6fbafd30f953a84ac3518ee" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.232878 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.337198 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-catalog-content\") pod \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.337527 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-utilities\") pod \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.337826 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7ns2\" (UniqueName: \"kubernetes.io/projected/c8b2fa4c-bbce-450f-96b5-77177c7339a9-kube-api-access-v7ns2\") pod \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\" (UID: \"c8b2fa4c-bbce-450f-96b5-77177c7339a9\") " Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.338908 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-utilities" (OuterVolumeSpecName: "utilities") pod "c8b2fa4c-bbce-450f-96b5-77177c7339a9" (UID: "c8b2fa4c-bbce-450f-96b5-77177c7339a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.339482 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.343659 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b2fa4c-bbce-450f-96b5-77177c7339a9-kube-api-access-v7ns2" (OuterVolumeSpecName: "kube-api-access-v7ns2") pod "c8b2fa4c-bbce-450f-96b5-77177c7339a9" (UID: "c8b2fa4c-bbce-450f-96b5-77177c7339a9"). InnerVolumeSpecName "kube-api-access-v7ns2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.367922 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8b2fa4c-bbce-450f-96b5-77177c7339a9" (UID: "c8b2fa4c-bbce-450f-96b5-77177c7339a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.441967 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7ns2\" (UniqueName: \"kubernetes.io/projected/c8b2fa4c-bbce-450f-96b5-77177c7339a9-kube-api-access-v7ns2\") on node \"crc\" DevicePath \"\"" Feb 03 13:13:48 crc kubenswrapper[4679]: I0203 13:13:48.441999 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b2fa4c-bbce-450f-96b5-77177c7339a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:13:49 crc kubenswrapper[4679]: I0203 13:13:49.157706 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwbpk" Feb 03 13:13:49 crc kubenswrapper[4679]: I0203 13:13:49.190002 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbpk"] Feb 03 13:13:49 crc kubenswrapper[4679]: I0203 13:13:49.199729 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwbpk"] Feb 03 13:13:50 crc kubenswrapper[4679]: I0203 13:13:50.227767 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" path="/var/lib/kubelet/pods/c8b2fa4c-bbce-450f-96b5-77177c7339a9/volumes" Feb 03 13:13:58 crc kubenswrapper[4679]: I0203 13:13:58.651450 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mlccb_00b9ca4d-dce2-4baa-b9ce-0eda632507e7/control-plane-machine-set-operator/0.log" Feb 03 13:13:58 crc kubenswrapper[4679]: I0203 13:13:58.783732 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cshmm_75e7133e-70dc-4896-bac7-d159e39737c1/kube-rbac-proxy/0.log" Feb 03 13:13:58 crc kubenswrapper[4679]: I0203 13:13:58.844645 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cshmm_75e7133e-70dc-4896-bac7-d159e39737c1/machine-api-operator/0.log" Feb 03 13:13:59 crc kubenswrapper[4679]: I0203 13:13:59.212734 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:13:59 crc kubenswrapper[4679]: E0203 13:13:59.213057 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" Feb 03 13:14:11 crc kubenswrapper[4679]: I0203 13:14:11.212585 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:14:11 crc kubenswrapper[4679]: I0203 13:14:11.890461 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-j5c2m_dc20b5f8-7353-4785-ac36-1f263f60b102/cert-manager-controller/0.log" Feb 03 13:14:12 crc kubenswrapper[4679]: I0203 13:14:12.023530 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-m6rvf_1934112b-b7de-4e8a-a94c-696e9a9412cd/cert-manager-cainjector/0.log" Feb 03 13:14:12 crc kubenswrapper[4679]: I0203 13:14:12.170296 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xxlm9_bcb6f977-4961-473e-afe5-be2b055270e6/cert-manager-webhook/0.log" Feb 03 13:14:12 crc kubenswrapper[4679]: I0203 13:14:12.365636 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"c5b1c9744e25f4bb15dd56ed525ad2f13e5bf23858ad1bcf8f8343650f3e4489"} Feb 03 13:14:26 crc kubenswrapper[4679]: I0203 13:14:26.132825 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wgq45_ee8ef129-bd8a-4296-9ac9-8bad21434ec6/nmstate-console-plugin/0.log" Feb 03 13:14:26 crc kubenswrapper[4679]: I0203 13:14:26.289108 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-s6vrk_bd7b6e88-5c1f-4d52-9876-a99fa1c3e6c4/nmstate-handler/0.log" Feb 03 13:14:26 crc kubenswrapper[4679]: I0203 13:14:26.343868 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-n5w4n_5d978da5-5322-40f9-a7ea-c7dd2295874f/kube-rbac-proxy/0.log" Feb 03 13:14:26 crc kubenswrapper[4679]: I0203 13:14:26.355295 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-n5w4n_5d978da5-5322-40f9-a7ea-c7dd2295874f/nmstate-metrics/0.log" Feb 03 13:14:26 crc kubenswrapper[4679]: I0203 13:14:26.557389 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l95cn_ad475107-250b-403a-8563-b90f107e4f89/nmstate-operator/0.log" Feb 03 13:14:26 crc kubenswrapper[4679]: I0203 13:14:26.580158 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-5fck4_4a9f66b5-a4ee-40b5-95cf-159557632d17/nmstate-webhook/0.log" Feb 03 13:14:54 crc kubenswrapper[4679]: I0203 13:14:54.835932 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sd924_8f94f678-3ab0-4078-b6ad-361e9326083c/kube-rbac-proxy/0.log" Feb 03 13:14:54 crc kubenswrapper[4679]: I0203 13:14:54.966166 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sd924_8f94f678-3ab0-4078-b6ad-361e9326083c/controller/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.020539 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.255909 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.269554 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.288914 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.308535 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.460582 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.467467 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.507081 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.514524 4679 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.640022 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-frr-files/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.675689 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-reloader/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.679752 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/cp-metrics/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.683029 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/controller/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.876140 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/kube-rbac-proxy/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.942141 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/frr-metrics/0.log" Feb 03 13:14:55 crc kubenswrapper[4679]: I0203 13:14:55.942865 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/kube-rbac-proxy-frr/0.log" Feb 03 13:14:56 crc kubenswrapper[4679]: I0203 13:14:56.078748 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/reloader/0.log" Feb 03 13:14:56 crc kubenswrapper[4679]: I0203 13:14:56.123785 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-w9d6c_97eb643e-6db5-4612-acbf-eef52bbd1cba/frr-k8s-webhook-server/0.log" Feb 03 13:14:56 crc kubenswrapper[4679]: I0203 13:14:56.390959 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-696d65d798-4rvqz_2b8aafdc-129f-420c-a901-fa59576bf426/manager/0.log" Feb 03 13:14:56 crc kubenswrapper[4679]: I0203 13:14:56.555727 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b48565759-btpsb_8e1b318f-e557-49ba-91c9-3489ccb19246/webhook-server/0.log" Feb 03 13:14:56 crc kubenswrapper[4679]: I0203 13:14:56.601191 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-72kj8_7c2c7dcb-cc91-4794-baaf-f766c8e7cd55/kube-rbac-proxy/0.log" Feb 03 13:14:57 crc kubenswrapper[4679]: I0203 13:14:57.273848 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-72kj8_7c2c7dcb-cc91-4794-baaf-f766c8e7cd55/speaker/0.log" Feb 03 13:14:57 crc kubenswrapper[4679]: I0203 13:14:57.330862 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jqlc5_cd3be7cf-aa64-4ffc-8b96-a567d85a2c35/frr/0.log" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.193037 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-22859"] Feb 03 13:15:00 crc kubenswrapper[4679]: E0203 13:15:00.194024 4679 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="extract-content" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.194042 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="extract-content" Feb 03 13:15:00 crc kubenswrapper[4679]: E0203 13:15:00.194077 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="registry-server" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.194084 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="registry-server" Feb 03 13:15:00 crc kubenswrapper[4679]: E0203 13:15:00.194096 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="extract-utilities" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.194104 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="extract-utilities" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.194281 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b2fa4c-bbce-450f-96b5-77177c7339a9" containerName="registry-server" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.195044 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.198026 4679 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.198146 4679 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.201795 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-22859"] Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.295892 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkmq\" (UniqueName: \"kubernetes.io/projected/97a71008-d42a-438c-858a-83940346c86d-kube-api-access-vmkmq\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.296016 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97a71008-d42a-438c-858a-83940346c86d-config-volume\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.296069 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97a71008-d42a-438c-858a-83940346c86d-secret-volume\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.398331 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/97a71008-d42a-438c-858a-83940346c86d-secret-volume\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.398535 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkmq\" (UniqueName: \"kubernetes.io/projected/97a71008-d42a-438c-858a-83940346c86d-kube-api-access-vmkmq\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.398661 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97a71008-d42a-438c-858a-83940346c86d-config-volume\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.399731 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97a71008-d42a-438c-858a-83940346c86d-config-volume\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.404693 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97a71008-d42a-438c-858a-83940346c86d-secret-volume\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.416826 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkmq\" (UniqueName: \"kubernetes.io/projected/97a71008-d42a-438c-858a-83940346c86d-kube-api-access-vmkmq\") pod \"collect-profiles-29502075-22859\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.530723 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:00 crc kubenswrapper[4679]: I0203 13:15:00.965256 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-22859"] Feb 03 13:15:01 crc kubenswrapper[4679]: I0203 13:15:01.845803 4679 generic.go:334] "Generic (PLEG): container finished" podID="97a71008-d42a-438c-858a-83940346c86d" containerID="39ce3867346611201749a61b27c65dd2f7fb3735921adc125f36eea1bfbaa7d2" exitCode=0 Feb 03 13:15:01 crc kubenswrapper[4679]: I0203 13:15:01.845864 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" event={"ID":"97a71008-d42a-438c-858a-83940346c86d","Type":"ContainerDied","Data":"39ce3867346611201749a61b27c65dd2f7fb3735921adc125f36eea1bfbaa7d2"} Feb 03 13:15:01 crc kubenswrapper[4679]: I0203 13:15:01.846163 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" event={"ID":"97a71008-d42a-438c-858a-83940346c86d","Type":"ContainerStarted","Data":"4e558e02249f872f7157856dd901babff07e4fab189005a2d323ea3188433590"} Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.217279 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.355051 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmkmq\" (UniqueName: \"kubernetes.io/projected/97a71008-d42a-438c-858a-83940346c86d-kube-api-access-vmkmq\") pod \"97a71008-d42a-438c-858a-83940346c86d\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.355095 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97a71008-d42a-438c-858a-83940346c86d-config-volume\") pod \"97a71008-d42a-438c-858a-83940346c86d\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.355184 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97a71008-d42a-438c-858a-83940346c86d-secret-volume\") pod \"97a71008-d42a-438c-858a-83940346c86d\" (UID: \"97a71008-d42a-438c-858a-83940346c86d\") " Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.355827 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a71008-d42a-438c-858a-83940346c86d-config-volume" (OuterVolumeSpecName: "config-volume") pod "97a71008-d42a-438c-858a-83940346c86d" (UID: "97a71008-d42a-438c-858a-83940346c86d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.360477 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a71008-d42a-438c-858a-83940346c86d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "97a71008-d42a-438c-858a-83940346c86d" (UID: "97a71008-d42a-438c-858a-83940346c86d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.361181 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a71008-d42a-438c-858a-83940346c86d-kube-api-access-vmkmq" (OuterVolumeSpecName: "kube-api-access-vmkmq") pod "97a71008-d42a-438c-858a-83940346c86d" (UID: "97a71008-d42a-438c-858a-83940346c86d"). InnerVolumeSpecName "kube-api-access-vmkmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.458152 4679 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97a71008-d42a-438c-858a-83940346c86d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.458219 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmkmq\" (UniqueName: \"kubernetes.io/projected/97a71008-d42a-438c-858a-83940346c86d-kube-api-access-vmkmq\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.458248 4679 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97a71008-d42a-438c-858a-83940346c86d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.873878 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" event={"ID":"97a71008-d42a-438c-858a-83940346c86d","Type":"ContainerDied","Data":"4e558e02249f872f7157856dd901babff07e4fab189005a2d323ea3188433590"} Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.873945 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-22859" Feb 03 13:15:03 crc kubenswrapper[4679]: I0203 13:15:03.873949 4679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e558e02249f872f7157856dd901babff07e4fab189005a2d323ea3188433590" Feb 03 13:15:04 crc kubenswrapper[4679]: I0203 13:15:04.307185 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n"] Feb 03 13:15:04 crc kubenswrapper[4679]: I0203 13:15:04.315938 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502030-8v67n"] Feb 03 13:15:06 crc kubenswrapper[4679]: I0203 13:15:06.222521 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7056a891-e884-4966-bbd1-8b22706082f1" path="/var/lib/kubelet/pods/7056a891-e884-4966-bbd1-8b22706082f1/volumes" Feb 03 13:15:09 crc kubenswrapper[4679]: I0203 13:15:09.659567 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/util/0.log" Feb 03 13:15:09 crc kubenswrapper[4679]: I0203 13:15:09.869833 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/util/0.log" Feb 03 13:15:09 crc kubenswrapper[4679]: I0203 13:15:09.883081 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/pull/0.log" Feb 03 13:15:09 crc kubenswrapper[4679]: I0203 13:15:09.889119 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/pull/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.038189 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/util/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.038640 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/extract/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.065412 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmzmds_c3851540-2643-497e-a54c-d7543287ebca/pull/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.214710 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/util/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.366472 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/util/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.379937 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/pull/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.424481 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/pull/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.598615 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/extract/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.599288 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/util/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.626590 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rb7nd_e945486d-e54e-4fab-a0a2-5564e08ce31c/pull/0.log" Feb 03 13:15:10 crc kubenswrapper[4679]: I0203 13:15:10.931016 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-utilities/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.048915 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-utilities/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.053486 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-content/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.068401 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-content/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.230304 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-content/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.262148 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/extract-utilities/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.469206 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-utilities/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.699608 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-content/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.699820 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-utilities/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.713177 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-content/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.816334 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wwj2t_aacd0fa8-7197-42cd-8023-62d7085d86a5/registry-server/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.897449 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-utilities/0.log" Feb 03 13:15:11 crc kubenswrapper[4679]: I0203 13:15:11.923015 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/extract-content/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.126268 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sbtmr_6d1001e8-7956-4d94-aed4-c482940134f4/marketplace-operator/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.258237 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-utilities/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.444958 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-content/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.457648 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-utilities/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.555008 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-content/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.578919 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vdzrp_f718cd3c-d9e9-45d7-abf0-989f2392abf8/registry-server/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.737802 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-content/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.739496 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/extract-utilities/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.858528 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k87kr_39d42fa0-e1ef-45cd-9ebe-7423dcb3c12b/registry-server/0.log" Feb 03 13:15:12 crc kubenswrapper[4679]: I0203 13:15:12.956409 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/extract-utilities/0.log" Feb 03 13:15:13 crc kubenswrapper[4679]: I0203 13:15:13.118777 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/extract-content/0.log" Feb 03 13:15:13 crc kubenswrapper[4679]: I0203 13:15:13.127993 4679 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/extract-content/0.log" Feb 03 13:15:13 crc kubenswrapper[4679]: I0203 13:15:13.128230 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/extract-utilities/0.log" Feb 03 13:15:13 crc kubenswrapper[4679]: I0203 13:15:13.337485 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/extract-utilities/0.log" Feb 03 13:15:13 crc kubenswrapper[4679]: I0203 13:15:13.365085 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/extract-content/0.log" Feb 03 13:15:13 crc kubenswrapper[4679]: I0203 13:15:13.523667 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llh4q_e535daa2-f963-4d21-aed1-118cce28fb76/registry-server/0.log" Feb 03 13:15:36 crc kubenswrapper[4679]: I0203 13:15:36.317612 4679 scope.go:117] "RemoveContainer" containerID="40cabf5de210551ac48649714025c28c3117f49881f770191c7d12dada34eb93" Feb 03 13:16:36 crc kubenswrapper[4679]: I0203 13:16:36.735753 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:16:36 crc kubenswrapper[4679]: I0203 13:16:36.736465 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:16:57 crc kubenswrapper[4679]: I0203 13:16:57.892856 4679 generic.go:334] "Generic (PLEG): container finished" podID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerID="39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b" exitCode=0 Feb 03 13:16:57 crc kubenswrapper[4679]: I0203 13:16:57.893042 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfpww/must-gather-nf8c2" event={"ID":"51323700-3c5a-476b-8470-9fb3abfd8c51","Type":"ContainerDied","Data":"39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b"} Feb 03 13:16:57 crc kubenswrapper[4679]: I0203 13:16:57.894062 4679 scope.go:117] "RemoveContainer" containerID="39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b" Feb 03 13:16:58 crc kubenswrapper[4679]: I0203 13:16:58.539289 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfpww_must-gather-nf8c2_51323700-3c5a-476b-8470-9fb3abfd8c51/gather/0.log" Feb 03 13:17:06 crc kubenswrapper[4679]: I0203 13:17:06.735916 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:17:06 crc kubenswrapper[4679]: I0203 13:17:06.736638 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.288287 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfpww/must-gather-nf8c2"] Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.289156 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pfpww/must-gather-nf8c2" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="copy" containerID="cri-o://0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9" gracePeriod=2 Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.299045 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfpww/must-gather-nf8c2"] Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.786635 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfpww_must-gather-nf8c2_51323700-3c5a-476b-8470-9fb3abfd8c51/copy/0.log" Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.788735 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.929191 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b22zh\" (UniqueName: \"kubernetes.io/projected/51323700-3c5a-476b-8470-9fb3abfd8c51-kube-api-access-b22zh\") pod \"51323700-3c5a-476b-8470-9fb3abfd8c51\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.929773 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51323700-3c5a-476b-8470-9fb3abfd8c51-must-gather-output\") pod \"51323700-3c5a-476b-8470-9fb3abfd8c51\" (UID: \"51323700-3c5a-476b-8470-9fb3abfd8c51\") " Feb 03 13:17:10 crc kubenswrapper[4679]: I0203 13:17:10.936161 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51323700-3c5a-476b-8470-9fb3abfd8c51-kube-api-access-b22zh" (OuterVolumeSpecName: "kube-api-access-b22zh") pod "51323700-3c5a-476b-8470-9fb3abfd8c51" (UID: "51323700-3c5a-476b-8470-9fb3abfd8c51"). InnerVolumeSpecName "kube-api-access-b22zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.016127 4679 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfpww_must-gather-nf8c2_51323700-3c5a-476b-8470-9fb3abfd8c51/copy/0.log" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.016614 4679 generic.go:334] "Generic (PLEG): container finished" podID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerID="0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9" exitCode=143 Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.016689 4679 scope.go:117] "RemoveContainer" containerID="0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.016734 4679 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfpww/must-gather-nf8c2" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.032480 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b22zh\" (UniqueName: \"kubernetes.io/projected/51323700-3c5a-476b-8470-9fb3abfd8c51-kube-api-access-b22zh\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.042135 4679 scope.go:117] "RemoveContainer" containerID="39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.095070 4679 scope.go:117] "RemoveContainer" containerID="0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9" Feb 03 13:17:11 crc kubenswrapper[4679]: E0203 13:17:11.095909 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9\": container with ID starting with 0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9 not found: ID does not exist" containerID="0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.096026 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9"} err="failed to get container status \"0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9\": rpc error: code = NotFound desc = could not find container \"0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9\": container with ID starting with 0c87f1280105b36b0542b5b63cda990b73b68ab0e2ff6af192c2f6c7b3e35df9 not found: ID does not exist" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.096057 4679 scope.go:117] "RemoveContainer" containerID="39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b" Feb 03 13:17:11 crc kubenswrapper[4679]: E0203 13:17:11.096451 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b\": container with ID starting with 39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b not found: ID does not exist" containerID="39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.096502 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b"} err="failed to get container status \"39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b\": rpc error: code = NotFound desc = could not find container \"39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b\": container with ID starting with 39d4cf652133e715d37301c3837f880e83a7d21693a983b566fb7d3ecefd849b not found: ID does not exist" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.112978 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51323700-3c5a-476b-8470-9fb3abfd8c51-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "51323700-3c5a-476b-8470-9fb3abfd8c51" (UID: "51323700-3c5a-476b-8470-9fb3abfd8c51"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:11 crc kubenswrapper[4679]: I0203 13:17:11.134206 4679 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/51323700-3c5a-476b-8470-9fb3abfd8c51-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:12 crc kubenswrapper[4679]: I0203 13:17:12.221116 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" path="/var/lib/kubelet/pods/51323700-3c5a-476b-8470-9fb3abfd8c51/volumes" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.009839 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fzz6z"] Feb 03 13:17:18 crc kubenswrapper[4679]: E0203 13:17:18.011553 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a71008-d42a-438c-858a-83940346c86d" containerName="collect-profiles" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.011577 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a71008-d42a-438c-858a-83940346c86d" containerName="collect-profiles" Feb 03 13:17:18 crc kubenswrapper[4679]: E0203 13:17:18.011603 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="gather" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.011613 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="gather" Feb 03 13:17:18 crc kubenswrapper[4679]: E0203 13:17:18.011641 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="copy" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.011653 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="copy" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.011942 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="gather" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.011968 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="51323700-3c5a-476b-8470-9fb3abfd8c51" containerName="copy" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.011980 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a71008-d42a-438c-858a-83940346c86d" containerName="collect-profiles" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.014211 4679 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.024383 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fzz6z"] Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.067145 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-catalog-content\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.067279 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzf7t\" (UniqueName: \"kubernetes.io/projected/6fe5414c-ef6c-420a-925e-e6bc32e0f800-kube-api-access-mzf7t\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.067330 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-utilities\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.169102 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-catalog-content\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.169225 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzf7t\" (UniqueName: \"kubernetes.io/projected/6fe5414c-ef6c-420a-925e-e6bc32e0f800-kube-api-access-mzf7t\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.169262 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-utilities\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.169784 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-utilities\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.169992 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-catalog-content\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.187769 4679 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mzf7t\" (UniqueName: \"kubernetes.io/projected/6fe5414c-ef6c-420a-925e-e6bc32e0f800-kube-api-access-mzf7t\") pod \"community-operators-fzz6z\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.339241 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:18 crc kubenswrapper[4679]: I0203 13:17:18.919384 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fzz6z"] Feb 03 13:17:19 crc kubenswrapper[4679]: I0203 13:17:19.093221 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzz6z" event={"ID":"6fe5414c-ef6c-420a-925e-e6bc32e0f800","Type":"ContainerStarted","Data":"5ac895d0a294d9c9e40f2b14d928c070d7cfc15b8fc50e493a9da9084c6e1090"} Feb 03 13:17:20 crc kubenswrapper[4679]: I0203 13:17:20.108324 4679 generic.go:334] "Generic (PLEG): container finished" podID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerID="5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786" exitCode=0 Feb 03 13:17:20 crc kubenswrapper[4679]: I0203 13:17:20.108674 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzz6z" event={"ID":"6fe5414c-ef6c-420a-925e-e6bc32e0f800","Type":"ContainerDied","Data":"5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786"} Feb 03 13:17:22 crc kubenswrapper[4679]: I0203 13:17:22.134438 4679 generic.go:334] "Generic (PLEG): container finished" podID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerID="4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80" exitCode=0 Feb 03 13:17:22 crc kubenswrapper[4679]: I0203 13:17:22.135036 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzz6z" event={"ID":"6fe5414c-ef6c-420a-925e-e6bc32e0f800","Type":"ContainerDied","Data":"4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80"} Feb 03 13:17:23 crc kubenswrapper[4679]: I0203 13:17:23.145426 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzz6z" event={"ID":"6fe5414c-ef6c-420a-925e-e6bc32e0f800","Type":"ContainerStarted","Data":"0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683"} Feb 03 13:17:23 crc kubenswrapper[4679]: I0203 13:17:23.172854 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fzz6z" podStartSLOduration=3.718405374 podStartE2EDuration="6.172831106s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="2026-02-03 13:17:20.11073057 +0000 UTC m=+4312.585626668" lastFinishedPulling="2026-02-03 13:17:22.565156312 +0000 UTC m=+4315.040052400" observedRunningTime="2026-02-03 13:17:23.171945014 +0000 UTC m=+4315.646841122" watchObservedRunningTime="2026-02-03 13:17:23.172831106 +0000 UTC m=+4315.647727204" Feb 03 13:17:28 crc kubenswrapper[4679]: I0203 13:17:28.343451 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:28 crc kubenswrapper[4679]: I0203 13:17:28.344092 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:28 crc kubenswrapper[4679]: I0203 13:17:28.391014 4679 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:29 crc kubenswrapper[4679]: I0203 13:17:29.267338 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:29 crc kubenswrapper[4679]: I0203 13:17:29.315207 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fzz6z"] Feb 03 13:17:31 crc kubenswrapper[4679]: I0203 13:17:31.218319 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fzz6z" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="registry-server" containerID="cri-o://0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683" gracePeriod=2 Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.083597 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.143847 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-catalog-content\") pod \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.144169 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzf7t\" (UniqueName: \"kubernetes.io/projected/6fe5414c-ef6c-420a-925e-e6bc32e0f800-kube-api-access-mzf7t\") pod \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.144427 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-utilities\") pod \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\" (UID: \"6fe5414c-ef6c-420a-925e-e6bc32e0f800\") " Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.145217 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-utilities" (OuterVolumeSpecName: "utilities") pod "6fe5414c-ef6c-420a-925e-e6bc32e0f800" (UID: "6fe5414c-ef6c-420a-925e-e6bc32e0f800"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.184580 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe5414c-ef6c-420a-925e-e6bc32e0f800-kube-api-access-mzf7t" (OuterVolumeSpecName: "kube-api-access-mzf7t") pod "6fe5414c-ef6c-420a-925e-e6bc32e0f800" (UID: "6fe5414c-ef6c-420a-925e-e6bc32e0f800"). InnerVolumeSpecName "kube-api-access-mzf7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.219471 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fe5414c-ef6c-420a-925e-e6bc32e0f800" (UID: "6fe5414c-ef6c-420a-925e-e6bc32e0f800"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.228586 4679 generic.go:334] "Generic (PLEG): container finished" podID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerID="0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683" exitCode=0 Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.228624 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzz6z" event={"ID":"6fe5414c-ef6c-420a-925e-e6bc32e0f800","Type":"ContainerDied","Data":"0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683"} Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.228648 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzz6z" event={"ID":"6fe5414c-ef6c-420a-925e-e6bc32e0f800","Type":"ContainerDied","Data":"5ac895d0a294d9c9e40f2b14d928c070d7cfc15b8fc50e493a9da9084c6e1090"} Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.228664 4679 scope.go:117] "RemoveContainer" containerID="0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.228664 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzz6z" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.255417 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.255783 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzf7t\" (UniqueName: \"kubernetes.io/projected/6fe5414c-ef6c-420a-925e-e6bc32e0f800-kube-api-access-mzf7t\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.255798 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe5414c-ef6c-420a-925e-e6bc32e0f800-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.268209 4679 scope.go:117] "RemoveContainer" containerID="4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.273993 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fzz6z"] Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.292697 4679 scope.go:117] "RemoveContainer" containerID="5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.296692 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fzz6z"] Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.329475 4679 scope.go:117] "RemoveContainer" containerID="0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683" Feb 03 13:17:32 crc kubenswrapper[4679]: E0203 13:17:32.330337 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683\": container with ID starting with 0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683 not found: ID does not exist" containerID="0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.330390 
4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683"} err="failed to get container status \"0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683\": rpc error: code = NotFound desc = could not find container \"0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683\": container with ID starting with 0ddc3f8c08be216a9a41d74732ab9aa2917eda0457e6b5c76c6e533711d9a683 not found: ID does not exist" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.330418 4679 scope.go:117] "RemoveContainer" containerID="4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80" Feb 03 13:17:32 crc kubenswrapper[4679]: E0203 13:17:32.331023 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80\": container with ID starting with 4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80 not found: ID does not exist" containerID="4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.331093 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80"} err="failed to get container status \"4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80\": rpc error: code = NotFound desc = could not find container \"4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80\": container with ID starting with 4ff4c98eb2643fbef84193f9b79f98c7d459af4a052cb8eaf5c0ed70e281ae80 not found: ID does not exist" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.331146 4679 scope.go:117] "RemoveContainer" containerID="5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786" Feb 03 13:17:32 crc kubenswrapper[4679]: E0203 13:17:32.331505 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786\": container with ID starting with 5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786 not found: ID does not exist" containerID="5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786" Feb 03 13:17:32 crc kubenswrapper[4679]: I0203 13:17:32.331539 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786"} err="failed to get container status \"5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786\": rpc error: code = NotFound desc = could not find container \"5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786\": container with ID starting with 5afe585f09721fe62699eba4a7ec6a157642b0edb42aae86b022fa6ba99c3786 not found: ID does not exist" Feb 03 13:17:34 crc kubenswrapper[4679]: I0203 13:17:34.230500 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" path="/var/lib/kubelet/pods/6fe5414c-ef6c-420a-925e-e6bc32e0f800/volumes" Feb 03 13:17:36 crc kubenswrapper[4679]: I0203 13:17:36.397437 4679 scope.go:117] "RemoveContainer" containerID="eff4d812890e36c0011ecc8258dd66d45c749ded027b9274aaff09ae3f5a7983" Feb 03 13:17:36 crc kubenswrapper[4679]: I0203 13:17:36.737065 4679 patch_prober.go:28] interesting 
pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:17:36 crc kubenswrapper[4679]: I0203 13:17:36.737150 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:17:36 crc kubenswrapper[4679]: I0203 13:17:36.737211 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" Feb 03 13:17:36 crc kubenswrapper[4679]: I0203 13:17:36.738155 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5b1c9744e25f4bb15dd56ed525ad2f13e5bf23858ad1bcf8f8343650f3e4489"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:17:36 crc kubenswrapper[4679]: I0203 13:17:36.738228 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://c5b1c9744e25f4bb15dd56ed525ad2f13e5bf23858ad1bcf8f8343650f3e4489" gracePeriod=600 Feb 03 13:17:37 crc kubenswrapper[4679]: I0203 13:17:37.280472 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="c5b1c9744e25f4bb15dd56ed525ad2f13e5bf23858ad1bcf8f8343650f3e4489" exitCode=0 Feb 03 13:17:37 crc kubenswrapper[4679]: I0203 13:17:37.280998 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"c5b1c9744e25f4bb15dd56ed525ad2f13e5bf23858ad1bcf8f8343650f3e4489"} Feb 03 13:17:37 crc kubenswrapper[4679]: I0203 13:17:37.281072 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerStarted","Data":"86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110"} Feb 03 13:17:37 crc kubenswrapper[4679]: I0203 13:17:37.281135 4679 scope.go:117] "RemoveContainer" containerID="a6583ccc7593e89b8daf5c3f238e0fb2ea052fba5b2a595419a2fd213becca0d" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.134859 4679 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4slcz"] Feb 03 13:18:50 crc kubenswrapper[4679]: E0203 13:18:50.137086 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="extract-utilities" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.137199 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="extract-utilities" Feb 03 13:18:50 crc kubenswrapper[4679]: E0203 13:18:50.137315 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" 
containerName="extract-content" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.137426 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="extract-content" Feb 03 13:18:50 crc kubenswrapper[4679]: E0203 13:18:50.137517 4679 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="registry-server" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.137581 4679 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="registry-server" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.137840 4679 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe5414c-ef6c-420a-925e-e6bc32e0f800" containerName="registry-server" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.139464 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.151670 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4slcz"] Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.221088 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8pm\" (UniqueName: \"kubernetes.io/projected/b3da4e79-4dd9-4734-9017-0505f2dbc179-kube-api-access-ws8pm\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.221275 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-utilities\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.221320 4679 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-catalog-content\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.322883 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-catalog-content\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.323021 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws8pm\" (UniqueName: \"kubernetes.io/projected/b3da4e79-4dd9-4734-9017-0505f2dbc179-kube-api-access-ws8pm\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.323278 4679 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-utilities\") pod \"certified-operators-4slcz\" (UID: 
\"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.323764 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-utilities\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.324439 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-catalog-content\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.351544 4679 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws8pm\" (UniqueName: \"kubernetes.io/projected/b3da4e79-4dd9-4734-9017-0505f2dbc179-kube-api-access-ws8pm\") pod \"certified-operators-4slcz\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") " pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.469061 4679 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:18:50 crc kubenswrapper[4679]: I0203 13:18:50.973816 4679 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4slcz"] Feb 03 13:18:51 crc kubenswrapper[4679]: I0203 13:18:51.034268 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerStarted","Data":"62d270b9648fa4a9fff3373effc894eafbaebe8d504e1603667cd6f4938afbea"} Feb 03 13:18:52 crc kubenswrapper[4679]: I0203 13:18:52.045793 4679 generic.go:334] "Generic (PLEG): container finished" podID="b3da4e79-4dd9-4734-9017-0505f2dbc179" containerID="9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547" exitCode=0 Feb 03 13:18:52 crc kubenswrapper[4679]: I0203 13:18:52.045864 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerDied","Data":"9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547"} Feb 03 13:18:52 crc kubenswrapper[4679]: I0203 13:18:52.049004 4679 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:18:53 crc kubenswrapper[4679]: I0203 13:18:53.054852 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerStarted","Data":"dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61"} Feb 03 13:18:54 crc kubenswrapper[4679]: I0203 13:18:54.066026 4679 generic.go:334] "Generic (PLEG): container finished" podID="b3da4e79-4dd9-4734-9017-0505f2dbc179" containerID="dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61" exitCode=0 Feb 03 13:18:54 crc kubenswrapper[4679]: I0203 13:18:54.066087 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" 
event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerDied","Data":"dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61"} Feb 03 13:18:55 crc kubenswrapper[4679]: I0203 13:18:55.075569 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerStarted","Data":"2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994"} Feb 03 13:18:55 crc kubenswrapper[4679]: I0203 13:18:55.099863 4679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4slcz" podStartSLOduration=2.44370419 podStartE2EDuration="5.099634298s" podCreationTimestamp="2026-02-03 13:18:50 +0000 UTC" firstStartedPulling="2026-02-03 13:18:52.048702023 +0000 UTC m=+4404.523598111" lastFinishedPulling="2026-02-03 13:18:54.704632131 +0000 UTC m=+4407.179528219" observedRunningTime="2026-02-03 13:18:55.090476186 +0000 UTC m=+4407.565372294" watchObservedRunningTime="2026-02-03 13:18:55.099634298 +0000 UTC m=+4407.574530386" Feb 03 13:19:00 crc kubenswrapper[4679]: I0203 13:19:00.470437 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:19:00 crc kubenswrapper[4679]: I0203 13:19:00.470998 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:19:00 crc kubenswrapper[4679]: I0203 13:19:00.521770 4679 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:19:01 crc kubenswrapper[4679]: I0203 13:19:01.177703 4679 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:19:01 crc kubenswrapper[4679]: I0203 13:19:01.227813 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4slcz"] Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.148750 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4slcz" podUID="b3da4e79-4dd9-4734-9017-0505f2dbc179" containerName="registry-server" containerID="cri-o://2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994" gracePeriod=2 Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.722738 4679 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.722738 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4slcz"
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.890251 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-utilities\") pod \"b3da4e79-4dd9-4734-9017-0505f2dbc179\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") "
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.890540 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8pm\" (UniqueName: \"kubernetes.io/projected/b3da4e79-4dd9-4734-9017-0505f2dbc179-kube-api-access-ws8pm\") pod \"b3da4e79-4dd9-4734-9017-0505f2dbc179\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") "
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.890588 4679 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-catalog-content\") pod \"b3da4e79-4dd9-4734-9017-0505f2dbc179\" (UID: \"b3da4e79-4dd9-4734-9017-0505f2dbc179\") "
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.891945 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-utilities" (OuterVolumeSpecName: "utilities") pod "b3da4e79-4dd9-4734-9017-0505f2dbc179" (UID: "b3da4e79-4dd9-4734-9017-0505f2dbc179"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.904040 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3da4e79-4dd9-4734-9017-0505f2dbc179-kube-api-access-ws8pm" (OuterVolumeSpecName: "kube-api-access-ws8pm") pod "b3da4e79-4dd9-4734-9017-0505f2dbc179" (UID: "b3da4e79-4dd9-4734-9017-0505f2dbc179"). InnerVolumeSpecName "kube-api-access-ws8pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:19:03 crc kubenswrapper[4679]: I0203 13:19:03.940289 4679 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3da4e79-4dd9-4734-9017-0505f2dbc179" (UID: "b3da4e79-4dd9-4734-9017-0505f2dbc179"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:03.993259 4679 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:03.993303 4679 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws8pm\" (UniqueName: \"kubernetes.io/projected/b3da4e79-4dd9-4734-9017-0505f2dbc179-kube-api-access-ws8pm\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:03.993318 4679 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3da4e79-4dd9-4734-9017-0505f2dbc179-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.156836 4679 generic.go:334] "Generic (PLEG): container finished" podID="b3da4e79-4dd9-4734-9017-0505f2dbc179" containerID="2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994" exitCode=0 Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.156914 4679 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4slcz" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.158241 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerDied","Data":"2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994"} Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.158431 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4slcz" event={"ID":"b3da4e79-4dd9-4734-9017-0505f2dbc179","Type":"ContainerDied","Data":"62d270b9648fa4a9fff3373effc894eafbaebe8d504e1603667cd6f4938afbea"} Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.158500 4679 scope.go:117] "RemoveContainer" containerID="2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.188135 4679 scope.go:117] "RemoveContainer" containerID="dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.196938 4679 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4slcz"] Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.210507 4679 scope.go:117] "RemoveContainer" containerID="9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.222813 4679 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4slcz"] Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.261660 4679 scope.go:117] "RemoveContainer" containerID="2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994" Feb 03 13:19:04 crc kubenswrapper[4679]: E0203 13:19:04.262207 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994\": container with ID starting with 2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994 not found: ID does not exist" containerID="2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994" Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.262255 
Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.262255 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994"} err="failed to get container status \"2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994\": rpc error: code = NotFound desc = could not find container \"2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994\": container with ID starting with 2b422aa5506d64b04e825802115a278a5d8e57eff72a28dca779864b8fe32994 not found: ID does not exist"
Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.262288 4679 scope.go:117] "RemoveContainer" containerID="dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61"
Feb 03 13:19:04 crc kubenswrapper[4679]: E0203 13:19:04.262850 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61\": container with ID starting with dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61 not found: ID does not exist" containerID="dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61"
Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.262887 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61"} err="failed to get container status \"dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61\": rpc error: code = NotFound desc = could not find container \"dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61\": container with ID starting with dd39023f90a60ccc6bbdd23f0c334cb8e0d966de2dbcc301c135c96fec41da61 not found: ID does not exist"
Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.262915 4679 scope.go:117] "RemoveContainer" containerID="9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547"
Feb 03 13:19:04 crc kubenswrapper[4679]: E0203 13:19:04.263203 4679 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547\": container with ID starting with 9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547 not found: ID does not exist" containerID="9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547"
Feb 03 13:19:04 crc kubenswrapper[4679]: I0203 13:19:04.263229 4679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547"} err="failed to get container status \"9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547\": rpc error: code = NotFound desc = could not find container \"9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547\": container with ID starting with 9baf743dc6b45e66ea8d1267196425746dec950eb96212e9afa488fe84740547 not found: ID does not exist"
Feb 03 13:19:06 crc kubenswrapper[4679]: I0203 13:19:06.222387 4679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3da4e79-4dd9-4734-9017-0505f2dbc179" path="/var/lib/kubelet/pods/b3da4e79-4dd9-4734-9017-0505f2dbc179/volumes"
Feb 03 13:19:36 crc kubenswrapper[4679]: I0203 13:19:36.510813 4679 scope.go:117] "RemoveContainer" containerID="202d4dfb8757262bf99942ed7598599b499cf9645d7482db110ce0789dcdf88d"
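The same RemoveContainer -> NotFound -> DeleteContainer-error triplet repeats for every container of the deleted pod, so when triaging a journal like this it is easier to tally the recurring patterns mechanically than to read them. A small Go filter over journal text on stdin (the event names are taken from the log above; the category labels and the suggested `journalctl -u kubelet | go run tally.go` invocation are illustrative):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
    	for sc.Scan() {
    		line := sc.Text()
    		switch {
    		case strings.Contains(line, "DeleteContainer returned error"):
    			counts["delete-notfound"]++
    		case strings.Contains(line, `"Probe failed"`):
    			counts["probe-failed"]++
    		case strings.Contains(line, "Cleaned up orphaned pod volumes dir"):
    			counts["orphan-cleanup"]++
    		}
    	}
    	for k, v := range counts {
    		fmt.Printf("%-16s %d\n", k, v)
    	}
    }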
Feb 03 13:20:06 crc kubenswrapper[4679]: I0203 13:20:06.735298 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:20:06 crc kubenswrapper[4679]: I0203 13:20:06.735932 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:20:36 crc kubenswrapper[4679]: I0203 13:20:36.573316 4679 scope.go:117] "RemoveContainer" containerID="017cb766d6e4d529eb42c72633cf7070bbbdcbc9ba717805a780454ca87de705"
Feb 03 13:20:36 crc kubenswrapper[4679]: I0203 13:20:36.600733 4679 scope.go:117] "RemoveContainer" containerID="13006fe0bac39884986445166aa8b0b88d8a66dcfe580d8a1f94799e42a7ef7a"
Feb 03 13:20:36 crc kubenswrapper[4679]: I0203 13:20:36.737434 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:20:36 crc kubenswrapper[4679]: I0203 13:20:36.737834 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:21:06 crc kubenswrapper[4679]: I0203 13:21:06.735678 4679 patch_prober.go:28] interesting pod/machine-config-daemon-8qvcg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:21:06 crc kubenswrapper[4679]: I0203 13:21:06.736455 4679 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:21:06 crc kubenswrapper[4679]: I0203 13:21:06.736543 4679 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg"
Feb 03 13:21:06 crc kubenswrapper[4679]: I0203 13:21:06.737317 4679 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110"} pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:21:06 crc kubenswrapper[4679]: I0203 13:21:06.737390 4679 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerName="machine-config-daemon" containerID="cri-o://86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110" gracePeriod=600
Feb 03 13:21:07 crc kubenswrapper[4679]: I0203 13:21:07.295598 4679 generic.go:334] "Generic (PLEG): container finished" podID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb" containerID="86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110" exitCode=0
Feb 03 13:21:07 crc kubenswrapper[4679]: I0203 13:21:07.295643 4679 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" event={"ID":"6483dca4-cab1-4db4-9aa9-0b616c6e9cbb","Type":"ContainerDied","Data":"86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110"}
Feb 03 13:21:07 crc kubenswrapper[4679]: I0203 13:21:07.295681 4679 scope.go:117] "RemoveContainer" containerID="c5b1c9744e25f4bb15dd56ed525ad2f13e5bf23858ad1bcf8f8343650f3e4489"
Feb 03 13:21:07 crc kubenswrapper[4679]: E0203 13:21:07.414106 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 13:21:08 crc kubenswrapper[4679]: I0203 13:21:08.307595 4679 scope.go:117] "RemoveContainer" containerID="86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110"
Feb 03 13:21:08 crc kubenswrapper[4679]: E0203 13:21:08.307916 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
Feb 03 13:21:21 crc kubenswrapper[4679]: I0203 13:21:21.212322 4679 scope.go:117] "RemoveContainer" containerID="86c89a0c7abc4cc13a118e0782b7bbb3a78e0981a0ae17899f3a51fe23522110"
Feb 03 13:21:21 crc kubenswrapper[4679]: E0203 13:21:21.213006 4679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8qvcg_openshift-machine-config-operator(6483dca4-cab1-4db4-9aa9-0b616c6e9cbb)\"" pod="openshift-machine-config-operator/machine-config-daemon-8qvcg" podUID="6483dca4-cab1-4db4-9aa9-0b616c6e9cbb"
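The "back-off 5m0s" in the CrashLoopBackOff errors above is the kubelet's restart backoff at its ceiling: per the Kubernetes documentation, restarts of a failing container are delayed 10s and doubled on each subsequent crash (20s, 40s, ...) up to a 5-minute cap, resetting only after the container runs cleanly for 10 minutes. A sketch of that doubling-with-cap schedule (the constants match the documented defaults; the code itself is mine, not kubelet source):

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextBackoff doubles the previous restart delay and clamps it at the
    // 5m ceiling that appears in the CrashLoopBackOff message above.
    func nextBackoff(prev time.Duration) time.Duration {
    	const initial = 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	if prev <= 0 {
    		return initial
    	}
    	if next := prev * 2; next < maxDelay {
    		return next
    	}
    	return maxDelay
    }

    func main() {
    	var d time.Duration
    	for i := 0; i < 7; i++ { // prints 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
    		d = nextBackoff(d)
    		fmt.Println("restart", i+1, "delayed", d)
    	}
    }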